VILO SLAM: Tightly Coupled Binocular Vision-Inertia SLAM Combined with LiDAR

Times Cited: 7
Authors
Peng, Gang [1 ,2 ]
Zhou, Yicheng [1 ,2 ]
Hu, Lu [1 ,2 ]
Xiao, Li [1 ,2 ]
Sun, Zhigang [1 ,2 ]
Wu, Zhangang [3 ]
Zhu, Xukang [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
[2] Minist Educ, Key Lab Image Proc & Intelligent Control, Wuhan 430074, Peoples R China
[3] Shantui Construct Machinery Co Ltd, Jining 272073, Peoples R China
Keywords
multi-sensor fusion; pose estimation; lidar; visual-inertial system
DOI
10.3390/s23104588
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Existing visual-inertial SLAM algorithms suffer from low accuracy and poor robustness when the robot moves at constant velocity, rotates purely, or encounters scenes with insufficient visual features. To address these problems, a tightly coupled vision-IMU-2D-lidar odometry (VILO) algorithm is proposed. First, observations from a low-cost 2D lidar are fused with visual-inertial observations in a tightly coupled manner. Second, the 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variables to be estimated, and the vision-IMU-2D-lidar residual constraint equations are constructed. Third, a nonlinear optimization is used to obtain the optimal robot pose, which solves the problem of fusing 2D lidar observations with visual-inertial information in a tightly coupled manner. Experimental results show that the algorithm retains reliable pose-estimation accuracy and robustness in many challenging environments, and the position and yaw-angle errors are greatly reduced. This work improves the accuracy and robustness of multi-sensor-fusion SLAM algorithms.
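The tightly coupled fusion described in the abstract amounts to stacking residuals from each sensor into one joint nonlinear least-squares problem over the robot pose. The sketch below is illustrative only, not the authors' implementation: it reduces the state to a 2D pose (x, y, yaw), models the lidar term as point-to-point residuals against pre-matched map points, and collapses the IMU preintegration term into a simple pose prior; all function names, weights, and the synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def se2_transform(pose, pts):
    """Apply a 2D pose (x, y, yaw) to an Nx2 array of body-frame points."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return pts @ R.T + np.array([x, y])

def residuals(pose, lidar_pts, map_pts, imu_prior, w_lidar=1.0, w_imu=0.5):
    """Stack the 2D lidar residuals and an IMU-derived pose prior residual."""
    # Lidar term: scan points transformed into the map frame vs. matched map points.
    r_lidar = w_lidar * (se2_transform(pose, lidar_pts) - map_pts).ravel()
    # IMU term (stand-in for a preintegration factor): penalize deviation from the prior.
    r_imu = w_imu * (pose - imu_prior)
    return np.concatenate([r_lidar, r_imu])

# Synthetic example: a known true pose and scan points observed in the body frame.
true_pose = np.array([0.3, -0.2, 0.1])
map_pts = np.random.default_rng(0).uniform(-5.0, 5.0, size=(30, 2))
c, s = np.cos(true_pose[2]), np.sin(true_pose[2])
R = np.array([[c, -s], [s, c]])
lidar_pts = (map_pts - true_pose[:2]) @ R  # inverse transform: R^T via right-multiplication

imu_prior = true_pose + np.array([0.05, -0.03, 0.02])  # slightly biased IMU prediction
sol = least_squares(residuals, imu_prior, args=(lidar_pts, map_pts, imu_prior))
```

Because the lidar residuals vanish at the true pose while the prior is biased, the optimum sits very close to the true pose: the many lidar constraints dominate the single weighted prior, which is the intuition behind the tightly coupled formulation.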
Pages: 16