SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping With Sweep Reconstruction

Times cited: 2
Authors
Yuan, Zikang [1 ]
Deng, Jie [2 ]
Ming, Ruiye [2 ]
Lang, Fengtian [2 ]
Yang, Xin [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Inst Artificial Intelligence, Wuhan 430074, Peoples R China
[2] Huazhong Univ Sci & Technol, Elect Informat & Commun, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Laser radar; Image reconstruction; Cameras; Image color analysis; Visualization; State estimation; Rendering (computer graphics); SLAM; localization; sensor fusion;
DOI
10.1109/LRA.2024.3389415
CLC number
TP24 [Robotics];
Discipline code
080202; 1405;
Abstract
Existing LiDAR-inertial-visual odometry and mapping (LIV-OAM) systems mainly utilize the LiDAR-inertial odometry (LIO) module for structure reconstruction and the LiDAR-assisted visual-inertial odometry (VIO) module for color rendering. However, the performance of existing LiDAR-assisted VIO modules does not match the accuracy delivered by LIO systems in scenarios containing rich textures and geometric structures (i.e., without failure modes for either the camera or the LiDAR). This letter introduces SR-LIVO, an advanced and novel LIV-OAM system that employs sweep reconstruction to align reconstructed sweeps with image timestamps. This allows the LIO module to accurately determine the state at every imaging moment, enhancing pose accuracy and processing efficiency. Experimental results on two public datasets demonstrate that: 1) SR-LIVO outperforms existing state-of-the-art LIV-OAM systems in pose accuracy, rendering performance, and runtime efficiency; and 2) in scenarios with rich textures and geometric structures, the LIO framework provides more accurate poses than the existing LiDAR-assisted VIO framework, which in turn improves rendering. We have released our source code to contribute to community development in this field.
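The core idea of sweep reconstruction, as described in the abstract, is to regroup the continuous LiDAR point stream into sweeps whose end times coincide with camera timestamps, so that the LIO state estimate is available exactly at each imaging moment. The following is a minimal illustrative sketch of that regrouping step, assuming a simple point representation of `(timestamp, x, y, z)`; the function name and data layout are hypothetical and do not reflect the authors' actual implementation.

```python
def reconstruct_sweeps(points, image_timestamps):
    """Regroup a time-sorted LiDAR point stream into sweeps that end at
    camera timestamps.

    points: list of (t, x, y, z) tuples, sorted by timestamp t.
    image_timestamps: sorted list of camera frame timestamps.
    Returns one list of points per camera frame, containing every point
    captured since the previous frame up to (and including) this frame's
    timestamp, so the LIO state can be solved exactly at imaging moments.
    """
    sweeps = []
    i = 0
    for t_img in image_timestamps:
        sweep = []
        # Consume all points captured up to this image timestamp.
        while i < len(points) and points[i][0] <= t_img:
            sweep.append(points[i])
            i += 1
        sweeps.append(sweep)
    return sweeps
```

With sweeps aligned this way, no interpolation of the LIO trajectory is needed at image times, which is the mechanism the abstract credits for the improved pose accuracy and efficiency.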
Pages: 5110-5117
Number of pages: 8
References
21 in total
  • [1] CT-ICP: Real-time Elastic LiDAR Odometry with Loop Closure
    Dellenbach, Pierre
    Deschaud, Jean-Emmanuel
    Jacquet, Bastien
    Goulette, Francois
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022, : 5580 - 5586
  • [2] Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments
    Gao, Fei
    Wu, William
    Gao, Wenliang
    Shen, Shaojie
    [J]. JOURNAL OF FIELD ROBOTICS, 2019, 36 (04) : 710 - 733
  • [3] Avoiding Dynamic Small Obstacles With Onboard Sensing and Computation on Aerial Robots
    Kong, Fanze
    Xu, Wei
    Cai, Yixi
    Zhang, Fu
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04) : 7869 - 7876
  • [4] Levinson J, 2011, IEEE INT VEH SYM, P163, DOI 10.1109/IVS.2011.5940562
  • [5] R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
    Lin, Jiarong
    Zhang, Fu
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 10672 - 10678
  • [6] R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping
    Lin, Jiarong
    Zheng, Chunran
    Xu, Wei
    Zhang, Fu
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04) : 7469 - 7476
  • [7] Mildenhall B, 2022, COMMUN ACM, V65, P99, DOI 10.1145/3503250
  • [8] NTU VIRAL: A visual-inertial-ranging-lidar dataset, from an aerial vehicle viewpoint
    Nguyen, Thien-Minh
    Yuan, Shenghai
    Cao, Muqing
    Lyu, Yang
    Nguyen, Thien H.
    Xie, Lihua
    [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2022, 41 (03) : 270 - 280
  • [9] VILO SLAM: Tightly Coupled Binocular Vision-Inertia SLAM Combined with LiDAR
    Peng, Gang
    Zhou, Yicheng
    Hu, Lu
    Xiao, Li
    Sun, Zhigang
    Wu, Zhangang
    Zhu, Xukang
    [J]. SENSORS, 2023, 23 (10)
  • [10] Qin T, 2018, IEEE INT C INT ROBOT, P3662, DOI 10.1109/IROS.2018.8593603