M3LVI: a multi-feature, multi-metric, multi-loop, LiDAR-visual-inertial odometry via smoothing and mapping

Times Cited: 1
Authors
Hu, Jiaxiang [1 ]
Shi, Xiaojun [1 ]
Ma, Chunyun [1 ]
Yao, Xin [1 ]
Wang, Yingxin [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Shaanxi Key Lab Intelligent Robots, Xian, Peoples R China
Source
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION | 2023, Vol. 50, No. 3
Funding
National Key R&D Program of China;
Keywords
Sensor fusion; State estimation; Simultaneous localization and mapping; LiDAR-visual-inertial system; REAL-TIME; ROBUST;
DOI
10.1108/IR-05-2022-0143
Chinese Library Classification (CLC) Number
T [Industrial Technology];
Discipline Code
08;
Abstract
Purpose - The purpose of this paper is to propose a multi-feature, multi-metric and multi-loop tightly coupled LiDAR-visual-inertial odometry, M3LVI, for high-accuracy and robust state estimation and mapping.
Design/methodology/approach - M3LVI is built atop a factor graph and composed of two subsystems: a LiDAR-inertial system (LIS) and a visual-inertial system (VIS). LIS implements multi-feature extraction on the point cloud, and then multi-metric transformation estimation is implemented to realize LiDAR odometry. LiDAR-enhanced images and IMU pre-integration are used in VIS to realize visual odometry, providing a reliable initial guess for the LIS matching module. Place recognition is performed by a dual loop module that combines Bag of Words and LiDAR-Iris to correct accumulated drift. M3LVI also functions properly when one of the subsystems fails, which greatly increases robustness in degraded environments.
Findings - Quantitative experiments were conducted on the KITTI data set and a campus data set to evaluate M3LVI. The experimental results show the algorithm achieves higher pose estimation accuracy than existing methods.
Practical implications - The proposed method can greatly improve the positioning and mapping accuracy of automated guided vehicles (AGVs) and has an important impact on AGV material distribution, one of the most important applications of industrial robots.
Originality/value - M3LVI divides the original point cloud into six types, uses multi-metric transformation estimation to estimate the robot state, and adopts a factor-graph optimization model to refine the state estimate, which improves pose estimation accuracy. When one subsystem fails, the other can complete localization independently, which greatly increases robustness in degraded environments.
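The factor-graph smoothing idea underlying M3LVI (odometry factors from the LIS/VIS front ends, plus loop-closure factors that pull back accumulated drift) can be illustrated with a toy 1-D pose graph. The sketch below is not the paper's implementation, just a hypothetical minimal least-squares example: three slightly drifting odometry factors plus one loop-closure constraint, solved via the normal equations.

```python
def solve_pose_graph(odom, loop_closure):
    """Toy 1-D pose-graph smoothing. Pose x0 is fixed at 0; each
    odometry factor constrains x[i+1] - x[i]; a single loop-closure
    factor constrains x[n] - x[0]. All factors have unit weight."""
    n = len(odom)                      # unknowns: x1..xn
    rows = []                          # (jacobian row, measurement)
    for i, z in enumerate(odom):       # residual: x[i+1] - x[i] - z
        J = [0.0] * n
        J[i] = 1.0
        if i > 0:
            J[i - 1] = -1.0
        rows.append((J, z))
    J = [0.0] * n                      # loop closure: x[n] - x0 - z
    J[n - 1] = 1.0
    rows.append((J, loop_closure))
    # Accumulate the normal equations H x = g  (H = J^T J, g = J^T z).
    H = [[0.0] * n for _ in range(n)]
    g = [0.0] * n
    for J, z in rows:
        for a in range(n):
            g[a] += J[a] * z
            for b in range(n):
                H[a][b] += J[a] * J[b]
    # Gaussian elimination, then back-substitution.
    for c in range(n):
        for r in range(c + 1, n):
            f = H[r][c] / H[c][c]
            for k in range(c, n):
                H[r][k] -= f * H[c][k]
            g[r] -= f * g[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = g[r] - sum(H[r][k] * x[k] for k in range(r + 1, n))
        x[r] = s / H[r][r]
    return x

# Dead-reckoned odometry alone would put the last pose at 3.15; the
# loop closure (true displacement 3.0) pulls it back to 3.0375.
poses = solve_pose_graph([1.05, 1.05, 1.05], 3.0)
```

In the actual system the factors are 6-DoF pose constraints and the graph is optimized incrementally (the record's keywords point to smoothing-and-mapping machinery), but the drift-correction mechanism is the same: the loop-closure residual redistributes error across all poses rather than only the last one.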
Pages: 483-495
Number of pages: 13