A Scene Recognition Method of Autonomous Developmental Network Based on Multi-sensor Fusion

Authors
Yu H. [1 ,2 ]
Fang Y. [1 ,2 ]
Wei Z. [1 ,2 ]
Affiliations
[1] Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin
[2] Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin
Source
Jiqiren/Robot | 2021, Vol. 43, No. 6
Keywords
Autonomous developmental neural network; Multi-sensor fusion; Scene recognition
DOI
10.13973/j.cnki.robot.200352
Abstract
To address the low accuracy and poor adaptability of existing scene recognition methods, the autonomous developmental neural network is applied to the robot scene recognition task, and two scene recognition methods combining the autonomous developmental network with multi-sensor fusion are proposed: a robot scene recognition method based on weighted Bayesian fusion, and a scene recognition method based on data fusion within a single autonomous developmental network architecture. These methods fuse the multi-sensor information at the decision layer and the data layer, respectively, to improve scene recognition accuracy, while the autonomous developmental network improves the adaptability of the recognition methods to various complex scenes. The proposed methods are tested and analyzed, which verifies their effectiveness and practicability. In addition, the data-fusion method within the single network architecture achieves higher recognition accuracy because it makes more efficient use of the collected data. © 2021, Science Press. All rights reserved.
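The abstract does not give the exact fusion formulas, but decision-layer weighted Bayesian fusion is commonly realized as a weighted product (log-linear pooling) of the per-sensor class posteriors followed by renormalization. The sketch below illustrates that generic scheme under these assumptions; the function name, weights, and sensor labels are hypothetical, not taken from the paper.

```python
import numpy as np

def weighted_bayesian_fusion(posteriors, weights):
    """Decision-level fusion of per-sensor class posteriors.

    posteriors: (n_sensors, n_classes) array; each row is one sensor's
                posterior over scene classes and sums to 1.
    weights:    (n_sensors,) reliability weights for the sensors.

    Combines the posteriors via a weighted product in log space
    (log-linear pooling), then renormalizes to a distribution.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Weighted sum of log-posteriors; small epsilon avoids log(0).
    log_fused = (weights[:, None] * np.log(posteriors + 1e-12)).sum(axis=0)
    fused = np.exp(log_fused)
    return fused / fused.sum()

# Illustrative only: two sensors voting over three scene classes,
# with the first sensor weighted as more reliable.
p_camera = [0.7, 0.2, 0.1]
p_lidar = [0.4, 0.5, 0.1]
fused = weighted_bayesian_fusion([p_camera, p_lidar], [0.7, 0.3])
scene = int(np.argmax(fused))  # index of the fused scene decision
```

With these example numbers the camera's stronger weight dominates, so the fused decision follows its top class; changing the weights shifts the decision toward the other sensor.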
Pages: 706-714 (8 pages)