An improved SLAM based on RK-VIF: Vision and inertial information fusion via Runge-Kutta method

Cited: 8
Authors
Cui, Jia-shan [1 ]
Zhang, Fang-rui [1 ]
Feng, Dong-zhu [1 ,5 ]
Li, Cong [2 ]
Li, Fei [3 ]
Tian, Qi-chen [4 ]
Affiliations
[1] Xidian Univ, Sch Aerosp Sci & Technol, Xian 710126, Peoples R China
[2] China Acad Space Technol, Xian Branch, Xian 710100, Peoples R China
[3] China Acad Launch Vehicle Technol, Beijing 100076, Peoples R China
[4] Chinese Acad Sci, Aerosp Informat Res Inst, Jinan 250100, Peoples R China
[5] Xidian Univ, Xian, Peoples R China
Source
DEFENCE TECHNOLOGY | 2023, Vol. 21
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
SLAM; Visual-inertial positioning; Sensor fusion; Unmanned system; Runge-Kutta; SIMULTANEOUS LOCALIZATION; PREINTEGRATION; ROBUST;
DOI
10.1016/j.dt.2021.10.009
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
Simultaneous Localization and Mapping (SLAM) is the foundation of autonomous navigation for unmanned systems. Existing SLAM solutions are mainly divided into visual SLAM (vSLAM), equipped with a camera, and lidar SLAM, equipped with a lidar. However, because pure visual SLAM has shortcomings such as low positioning accuracy, this paper proposes a visual-inertial information fusion SLAM based on Runge-Kutta improved pre-integration. First, the Inertial Measurement Unit (IMU) information between two adjacent keyframes is pre-integrated at the front-end to provide IMU constraints for visual-inertial information fusion. In particular, to improve the accuracy of pre-integration, the paper uses the Runge-Kutta algorithm instead of Euler integration to calculate the pre-integration value at the next moment. Then, the IMU pre-integration value is used as the initial value of the system state at the current frame time. We combine the visual reprojection error and the IMU pre-integration error to optimize state variables such as speed and pose, and recover the three-dimensional coordinates of map points. Finally, we set a sliding window to optimize the map points' coordinates and the state variables. The experiments are divided into a dataset experiment and a complex indoor-environment experiment. The results show that, compared with pure visual SLAM and existing visual-inertial fusion SLAM, our method has higher positioning accuracy. (c) 2023 China Ordnance Society. Publishing services by Elsevier B.V. on behalf of KeAi Communications Co. Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
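The abstract's key numerical idea is replacing Euler integration with a Runge-Kutta scheme when propagating IMU pre-integration terms. The toy sketch below (not the paper's on-manifold pre-integration; the integrand and all function names are illustrative assumptions) shows why: for the same step count, classic fourth-order Runge-Kutta (RK4) accumulates far less integration error than forward Euler.

```python
import math

def euler_step(f, t, y, h):
    # Forward Euler: one slope evaluation, first-order accurate.
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # Classic RK4: four slope evaluations, fourth-order accurate.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, y0, t0, t1, n):
    # March from t0 to t1 in n fixed steps with the given stepper.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

# Toy ODE dy/dt = cos(t), whose exact solution from y(0)=0 is sin(t).
f = lambda t, y: math.cos(t)
exact = math.sin(1.0)
err_euler = abs(integrate(euler_step, f, 0.0, 0.0, 1.0, 100) - exact)
err_rk4 = abs(integrate(rk4_step, f, 0.0, 0.0, 1.0, 100) - exact)
print(f"Euler error: {err_euler:.2e}, RK4 error: {err_rk4:.2e}")
```

In IMU pre-integration, the integrand is the measured acceleration and angular rate between keyframes rather than a closed-form function, but the same accuracy argument applies: higher-order steps reduce the drift accumulated between visual keyframes at a modest extra cost per IMU sample.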
Pages: 133-146
Page count: 14
Related Papers
27 records
[1]  
Barfoot Timothy D, 2017, STATE ESTIMATION ROB
[2]   Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age [J].
Cadena, Cesar ;
Carlone, Luca ;
Carrillo, Henry ;
Latif, Yasir ;
Scaramuzza, Davide ;
Neira, Jose ;
Reid, Ian ;
Leonard, John J. .
IEEE TRANSACTIONS ON ROBOTICS, 2016, 32 (06) :1309-1332
[3]   Research on simultaneous localization and mapping for AUV by an improved method: Variance reduction FastSLAM with simulated annealing [J].
Cui, Jiashan ;
Feng, Dongzhu ;
Li, Yunhui ;
Tian, Qichen .
DEFENCE TECHNOLOGY, 2020, 16 (03) :651-661
[4]   Closed-form preintegration methods for graph-based visual-inertial navigation [J].
Eckenhoff, Kevin ;
Geneva, Patrick ;
Huang, Guoquan .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2019, 38 (05) :563-586
[5]   On-Manifold Preintegration for Real-Time Visual-Inertial Odometry [J].
Forster, Christian ;
Carlone, Luca ;
Dellaert, Frank ;
Scaramuzza, Davide .
IEEE TRANSACTIONS ON ROBOTICS, 2017, 33 (01) :1-21
[6]  
Forster C, 2015, ROBOTICS: SCIENCE AND SYSTEMS XI
[7]   Visual simultaneous localization and mapping: a survey [J].
Fuentes-Pacheco, Jorge ;
Ruiz-Ascencio, Jose ;
Manuel Rendon-Mancha, Juan .
ARTIFICIAL INTELLIGENCE REVIEW, 2015, 43 (01) :55-81
[8]   EKF-Based Visual Inertial Navigation Using Sliding Window Nonlinear Optimization [J].
He, Sejong ;
Cha, Jaehyuck ;
Park, Chan Gook .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2019, 20 (07) :2470-2479
[9]  
Huang GQ, 2019, IEEE INT CONF ROBOT, P9572, DOI [10.1109/icra.2019.8793604, 10.1109/ICRA.2019.8793604]
[10]  
Kok M, 2017, FOUND TRENDS SIGNAL, V11, P1, DOI 10.1561/2000000094