Accurate LiDAR-Camera Fused Odometry and RGB-Colored Mapping

Cited by: 8
Authors
Lin, Zhipeng [1,2]
Gao, Zhi [3,4]
Chen, Ben M. [1]
Chen, Jingwei [3]
Li, Chenyang [3]
Affiliations
[1] Chinese Univ Hong Kong, Dept Mech & Automat Engn, Hong Kong 999077, Peoples R China
[2] Peng Cheng Lab, Dept Math & Theories, Shenzhen 518000, Peoples R China
[3] Wuhan Univ, Sch Remote Sensing Informat Engn, Wuhan 430072, Peoples R China
[4] Wuhan Univ, Hubei Luojia Lab, Wuhan 430079, Peoples R China
Keywords
Laser radar; Simultaneous localization and mapping; Image color analysis; Odometry; Cameras; Point cloud compression; Feature extraction; Localization; mapping; sensor fusion; simultaneous localization and mapping (SLAM)
DOI
10.1109/LRA.2024.3356982
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Owing to the complementary properties of different sensors, multi-sensor fusion can effectively improve accuracy and handle challenging scenes in simultaneous localization and mapping (SLAM) tasks. To this end, we propose a novel LiDAR-camera fused method for odometry and mapping using dense colored point clouds. With the camera well calibrated to the LiDAR, we can acquire colored point clouds, which provide both color and geometric feature constraints for SLAM. Our LiDAR-camera fused odometry and mapping system leverages geometric features from the point cloud and color information from the camera. The main innovation is projecting the colored points onto the local point-cloud plane and formulating an RGB color objective function for SLAM. We jointly optimize the geometric and color objective functions to estimate the precise pose of the robot. In particular, we maintain a color feature map and a planar feature map separately during optimization, which significantly reduces the algorithm's computation. Evaluation experiments are performed on a UGV platform and a handheld platform. We demonstrate the effectiveness of our LiDAR-camera fusion method using the solid-state LiDAR and camera of an Intel RealSense L515 sensor. The results show that our method effectively improves localization accuracy, works well in challenging environments, and outperforms existing methods. We will share the code publicly to benefit the community (after the review stage).
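A minimal sketch of the joint objective the abstract describes may help fix ideas; the notation below (pose $\mathbf{T}$, plane normal $\mathbf{n}_i$ and anchor point $\mathbf{c}_i$ from the planar feature map, plane projection $\Pi_j$, plane color lookup $C_j$ from the color feature map, point color $\mathbf{c}_{q_j}$, weight $\lambda$, robust kernel $\rho$) is assumed for illustration and is not the paper's own:

\[
\mathbf{T}^{*} \;=\; \arg\min_{\mathbf{T}\in SE(3)}\;
\sum_{i}\rho\Big(\big(\mathbf{n}_i^{\top}(\mathbf{T}\mathbf{p}_i-\mathbf{c}_i)\big)^{2}\Big)
\;+\;\lambda\sum_{j}\rho\Big(\big\|\,C_j\big(\Pi_j(\mathbf{T}\mathbf{q}_j)\big)-\mathbf{c}_{q_j}\big\|^{2}\Big)
\]

Under these assumptions, the first term is a standard point-to-plane geometric residual against the planar feature map, and the second compares each colored point's RGB value with the color interpolated on the local plane after projection, in the spirit of colored point cloud registration; maintaining the two maps separately lets each residual query only its own map.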
Pages: 2495-2502
Page count: 8