Adaptive Multi-Sensor Fusion for SLAM: A Scan Context-Driven Approach

Times Cited: 0
Authors
Zhang, Yijing [1]
Liu, Jia [1,2]
Cao, Runxi [1]
Zhang, Yunxi [1,2]
Affiliations
[1] Tianjin Univ Technol & Educ, Sch Automat & Elect Engn, Tianjin 300222, Peoples R China
[2] Tianjin Univ Technol & Educ, Tianjin Key Lab of Information Sensing & Intelligent Control, Tianjin 300222, Peoples R China
Keywords
Laser radar; Feature extraction; Simultaneous localization and mapping; Visualization; Point cloud compression; Robot sensing systems; Robots; Accuracy; Cameras; Robustness; Scan context; SLAM; multi-sensor fusion; loop closure detection; descriptors; simultaneous localization
DOI
10.1109/ACCESS.2024.3523129
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
This paper proposes SC-LVI-SAM, a multi-sensor fusion SLAM algorithm built on Scan Context, to address the loss of positioning accuracy that multi-sensor fusion SLAM suffers in complex large-scale scenes when feature points are missing or motion is prolonged. First, the Scan Context method preprocesses the LiDAR point cloud to generate a descriptor of the environment. This descriptor, together with IMU and visual sensor data, drives state estimation, yielding initial pose estimates and motion information. The scan context module then uses the descriptor for place recognition and loop closure detection, providing a more accurate feature description and richer context information for fast loop closure matching. Unlike the local feature description of DBoW2, it preserves the spatial relationships and ordering among features, improving the accuracy and robustness of loop closure detection. Finally, global optimization corrects the accumulated error across the entire trajectory and map. On the KAIST02 and Riverside01 sequences of the MulRan dataset, the root mean square error of the absolute pose error of the proposed method is 85.17% and 91.30% lower, respectively, than that of LVI-SAM. Experimental results on multiple public benchmark datasets demonstrate that, at nearly the same computational cost, the proposed algorithm improves positioning accuracy, robustness, and mapping accuracy, yielding better global consistency in the generated map.
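For illustration, a minimal NumPy sketch of the standard Scan Context descriptor [7] that this pipeline builds on is given below. It shows the ring-sector height matrix and the rotation-invariant distance used to score loop closure candidates; the bin counts (20 rings, 60 sectors) and the 80 m maximum range are common defaults from [7], assumed here rather than taken from the SC-LVI-SAM paper, and this sketch is not the authors' implementation.

import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    # points: (N, 3) LiDAR points in the sensor frame; z is assumed
    # pre-offset (e.g. z + sensor mounting height, as in [7]) so that
    # bin heights are non-negative; empty bins keep height 0.
    desc = np.zeros((num_rings, num_sectors))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                    # radial distance from the sensor
    theta = np.arctan2(y, x) + np.pi      # azimuth shifted into (0, 2*pi]
    keep = (r > 0.0) & (r < max_range)
    ring = np.minimum((r[keep] / max_range * num_rings).astype(int),
                      num_rings - 1)
    sector = np.minimum((theta[keep] / (2.0 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    np.maximum.at(desc, (ring, sector), z[keep])  # max height per polar bin
    return desc

def sc_distance(d1, d2):
    # Scan Context distance: mean column-wise cosine distance, minimized
    # over all circular column shifts of d2 (i.e. over yaw rotations).
    best = np.inf
    for shift in range(d2.shape[1]):
        d2s = np.roll(d2, shift, axis=1)
        num = np.sum(d1 * d2s, axis=0)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(d2s, axis=0)
        cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        best = min(best, 1.0 - cos.mean())
    return best

A loop closure candidate is typically accepted when sc_distance between the current and a stored descriptor falls below a threshold. Because the distance is minimized over column shifts, detection is invariant to the yaw difference between revisits, which is what lets the descriptor preserve the spatial ordering of features that a bag-of-words representation such as DBoW2 discards.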
Pages: 149-159
Page count: 11
Related Papers
18 records in total
[1] Cadena C, Carlone L, Carrillo H, Latif Y, Scaramuzza D, Neira J, Reid I, Leonard J J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
[2] Chen C, Zhu H, Wang L, Liu Y. A Stereo Visual-Inertial SLAM Approach for Indoor Mobile Robots in Unknown Environments Without Occlusions. IEEE Access, 2019, 7: 185408-185421.
[3] Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research, 2013, 32(11): 1231-1237.
[4] Graeter J. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 7872. DOI: 10.1109/IROS.2018.8594394.
[5] Kamarudin K, Mamduh S M, Shakaff A Y M, Zakaria A. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques. Sensors, 2014, 14(12): 23365-23387.
[6] Kim G, Park Y S, Cho Y, Jeong J, Kim A. MulRan: Multimodal Range Dataset for Urban Place Recognition. 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020: 6246-6253.
[7] Kim G, Kim A. Scan Context: Egocentric Spatial Descriptor for Place Recognition Within 3D Point Cloud Map. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018: 4802. DOI: 10.1109/IROS.2018.8593953.
[8] Li Y Q. Infrared and Laser Engineering, 2023, 52: 135.
[9] Mu L, Yao P, Zheng Y, Chen K, Wang F, Qi N. Research on SLAM Algorithm of Mobile Robot Based on the Fusion of 2D LiDAR and Depth Camera. IEEE Access, 2020, 8: 157628-157642.
[10] Mur-Artal R, Tardos J D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.