A Review of Research on SLAM Technology Based on the Fusion of LiDAR and Vision

Times Cited: 0
Authors
Chen, Peng [1 ]
Zhao, Xinyu [1 ]
Zeng, Lina [1 ,2 ,3 ]
Liu, Luxinyu [1 ]
Liu, Shengjie [1 ,2 ,3 ]
Sun, Li [1 ,2 ,3 ]
Li, Zaijin [1 ,2 ,3 ]
Chen, Hao [1 ,2 ]
Liu, Guojun [1 ,2 ,3 ]
Qiao, Zhongliang [1 ,2 ,3 ]
Qu, Yi [1 ,2 ,3 ]
Xu, Dongxin [2 ]
Li, Lianhe [3 ]
Li, Lin [1 ,2 ,3 ]
Affiliations
[1] Hainan Normal Univ, Coll Phys & Elect Engn, Haikou 571158, Peoples R China
[2] Hainan Normal Univ, Key Lab Laser Technol & Optoelect Funct Mat Hainan, Haikou 571158, Peoples R China
[3] Hainan Normal Univ, Hainan Int Joint Res Ctr Semicond Lasers, Haikou 571158, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Hainan Province;
Keywords
LiDAR; vision sensors; SLAM; data fusion; autonomous navigation; dynamic environments; deep learning; semantic information; VISUAL SLAM; TRACKING; NAVIGATION;
DOI
10.3390/s25051447
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
In recent years, simultaneous localization and mapping (SLAM) based on the fusion of LiDAR and vision has gained extensive attention in the fields of autonomous navigation and environment sensing. The limitations of single sensors in feature-scarce scenes (low texture, repetitive structure) and in dynamic environments have prompted researchers to combine LiDAR with other sensors, in particular through effective fusion with vision sensors; when further combined with deep learning and adaptive algorithms, this approach has proven highly effective in a wide variety of situations. LiDAR, with its ability to acquire high-precision 3D spatial information, offers high reliability in complex and dynamic environments. This paper analyzes the research status, main results, and findings of early single-sensor SLAM technology and of current LiDAR-vision fusion SLAM. By categorizing and summarizing the existing literature, it examines specific solutions to current problems (complexity of data fusion, computational burden and real-time performance, multi-scenario data processing, etc.), discusses the trends and limitations of current research, and looks forward to future research directions, including multi-sensor fusion, algorithm optimization, improvement of real-time performance, and expansion of application scenarios. This review aims to provide guidelines and insights for the development of LiDAR-vision fusion SLAM technology and to serve as a reference for further SLAM research.
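As an illustration of the data-association step that underlies most LiDAR-vision fusion pipelines discussed in the review, the following minimal sketch projects LiDAR points into a calibrated camera image so that visual features can be assigned metric depth. All names and calibration values (project_lidar_to_image, T_cam_lidar, K) are hypothetical and assume a standard pinhole camera model; this is not an implementation from any specific system in the surveyed literature.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into the image plane of a calibrated camera.

    points_lidar : (N, 3) array of points in the LiDAR frame.
    T_cam_lidar  : (4, 4) homogeneous transform from the LiDAR frame to the camera frame.
    K            : (3, 3) camera intrinsic matrix.
    Returns pixel coordinates (M, 2) and depths (M,) for points in front of the camera.
    """
    # Transform LiDAR points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points with positive depth (in front of the camera).
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]

    # Perspective projection with the pinhole model, then normalize by depth.
    uv_h = (K @ pts_cam.T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    return uv, pts_cam[:, 2]

# Example with made-up calibration values (illustrative only).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_cam_lidar = np.eye(4)  # identity extrinsics for this sketch
scan = np.random.rand(1000, 3) * 20.0 + np.array([0.0, 0.0, 1.0])
pixels, depths = project_lidar_to_image(scan, T_cam_lidar, K)
```

In a real fusion front end, the returned pixel locations would be matched against extracted image features (or interpolated into a sparse depth map), giving the visual pipeline the metric scale that a monocular camera alone cannot provide.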
Pages: 24