Online Calibration Method of Lidar-Visual Sensor External Parameters Based on 4D Correlation Pyramid

Cited: 4
Authors
Liu, Hui [1]
Meng, Liwen [1]
Duan, Yijian [1]
Wu, Danfeng [2]
Huang, Boru [1]
Wu, Jiachun [1]
Meng, Yanmei [1]
Affiliations
[1] Guangxi Univ, Sch Mech Engn, Nanning 530004, Guangxi, Peoples R China
[2] Beijing Union Univ, Coll Robot, Beijing 100020, Peoples R China
Source
CHINESE JOURNAL OF LASERS-ZHONGGUO JIGUANG | 2024, Vol. 51, No. 17
Keywords
measurement; lidar; visual sensor; online calibration; deep learning; intensity information; CAMERA;
DOI
10.3788/CJL231290
CLC Number
O43 [Optics]
Subject Classification Codes
070207; 0803
Abstract
Objective In the fields of robotic vision, 3D scene reconstruction, autonomous driving of unmanned vehicles, and virtual reality, lidar and vision sensor fusion systems have become critical technologies, providing powerful perception capabilities for various application scenarios. In these systems, high-quality data fusion is crucial and requires accurate calibration of the external parameters between the lidar and the vision sensor. However, existing calibration methods have several problems: target-based methods require additional preparation and are limited to offline use, target-free methods generalize poorly, and learning-based methods ignore detailed information. To address these problems, this research proposes a new, end-to-end learnable online calibration method for lidar-visual sensor external parameters based on a four-dimensional (4D) correlation pyramid. The method not only eliminates the need for manual intervention but is also applicable to different initial error ranges with strong generalization capability, enabling real-time estimation of the six-degree-of-freedom (6-DOF) lidar-visual sensor external parameters.

Methods First, the method perceives texture information in sparse point clouds by introducing intensity information into feature extraction, thereby improving the ability to accurately capture object texture features and structural information. Second, by constructing a 4D correlation pyramid and adaptively merging features of different scales, it effectively handles large initial error values and the loss of detailed information, improves the robustness of the algorithm, and makes it suitable for different initial error ranges.
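As an illustration of the core construction, a 4D correlation volume correlates every camera-feature location with every (projected) lidar-feature location, and pooling over the lidar dimensions yields a pyramid of coarse-to-fine matching costs. The NumPy sketch below is a minimal illustration of that idea only; the function name, dot-product normalization, and 2x2 average pooling scheme are assumptions, not the paper's implementation:

```python
import numpy as np

def correlation_pyramid(feat_cam, feat_lidar, num_levels=3):
    """Build a 4D all-pairs correlation volume between a camera feature map
    and a projected lidar feature map, then pool it into a pyramid.

    feat_cam, feat_lidar: arrays of shape (H, W, C).
    Returns a list of volumes with shapes (H, W, H/2^k, W/2^k).
    """
    H, W, C = feat_cam.shape
    # All-pairs dot products: corr[i, j, k, l] = <feat_cam[i, j], feat_lidar[k, l]>
    corr = np.einsum('ijc,klc->ijkl', feat_cam, feat_lidar) / np.sqrt(C)
    pyramid = [corr]
    for _ in range(num_levels - 1):
        h, w = pyramid[-1].shape[2:]
        # 2x2 average pooling over the lidar dimensions only, keeping
        # full resolution on the camera side for per-pixel lookups.
        pooled = pyramid[-1].reshape(H, W, h // 2, 2, w // 2, 2).mean(axis=(3, 5))
        pyramid.append(pooled)
    return pyramid
```

Keeping the camera dimensions at full resolution while pooling the lidar dimensions lets each pixel index matching costs at several scales, which is what makes large initial misalignments tractable.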
The design of the loss function takes into account the geometric structure of the point cloud, achieves decoupling from the internal parameters of the visual sensor, and improves generalization performance. In addition, iterative refinement with multiple error-range networks is introduced, effectively improving calibration accuracy by training networks for different error ranges.

Results and Discussions The proposed online lidar-visual sensor calibration method is significantly innovative and practical. Verified over different initial error ranges on KITTI Visual Odometry (KITTI VO), the single-model calibration network achieves significant reductions in translation and rotation calibration errors. The translation and rotation errors are 0.339 cm and 0.026°, respectively (Table 1), reduced by 6.09% and 13.33%, respectively, compared with those obtained by the LCCNet network. Compared with existing learning-based methods, the proposed algorithm reduces the rotation error by 20.51% and 32.56%, respectively (Tables 3 and 4). The generalization experiments show that the algorithm achieves satisfactory calibration results in each error range of the KITTI-360 dataset (Table 5) and has strong generalization performance. Even under untrained scenes and different acquisition equipment (Table 6), the proposed algorithm still yields excellent calibration models. The visualization results vividly demonstrate the excellent calibration performance of the algorithm within different error ranges (Fig. 7). The ability of the network to gradually optimize the calibration results over multiple iterations is verified by the error distribution diagram (Fig. 9) and the trend with the number of iterations.
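The multi-error-range refinement can be pictured as a coarse-to-fine cascade: each network, trained for a progressively smaller initial-error range, predicts a residual transform that is composed onto the current extrinsic estimate. The NumPy sketch below shows only this composition logic; the `net(points, image, T)` interface and the left-composition convention are assumptions for illustration, not the paper's actual network:

```python
import numpy as np

def iterative_refinement(T_init, nets, lidar_points, image):
    """Coarse-to-fine cascade of extrinsic corrections.

    T_init: 4x4 initial lidar-to-camera extrinsic guess.
    nets:   list of callables net(points, image, T) -> 4x4 residual transform,
            ordered from largest to smallest trained error range.
    """
    T = T_init.copy()
    for net in nets:
        dT = net(lidar_points, image, T)   # predicted residual transform
        T = dT @ T                         # left-compose the correction
    return T
```

Because each stage only has to remove the residual error left by the previous one, later networks can be trained on tight error distributions and resolve fine misalignments.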
Finally, accurate external parameters between the lidar and the visual sensor are successfully obtained by the proposed online calibration network, and a highly accurate three-dimensional (3D) color map is generated (Fig. 10), demonstrating the excellent performance of the algorithm.

Conclusions In summary, this paper proposes an online calibration method for lidar-visual sensors based on the 4D correlation pyramid. By introducing intensity information and 4D correlation feature pyramid modules, the problems in the calibration process, including large initial error values, weak texture features, and poor generalization performance, are effectively solved. The proposed algorithm is shown to be superior to traditional methods in all aspects of performance, especially in calibration over a range of different initial errors. The algorithm offers real-time operation, strong generalization capability, and low network complexity. It provides a reliable and effective calibration solution for applications of lidar and visual sensor fusion systems, with broad practical prospects.
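Generating a colored 3D map from calibrated extrinsics amounts to projecting each lidar point into the image and sampling the pixel color. A minimal NumPy sketch of this standard projection, assuming a pinhole camera with intrinsics K and a 4x4 lidar-to-camera extrinsic T_lc (all names illustrative):

```python
import numpy as np

def colorize_points(points, image, T_lc, K):
    """Project lidar points (N, 3) into the image with extrinsics T_lc (4x4)
    and intrinsics K (3x3), and sample pixel colors for a colored 3D map."""
    N = points.shape[0]
    pts_h = np.hstack([points, np.ones((N, 1))])   # homogeneous lidar points
    cam = (T_lc @ pts_h.T).T[:, :3]                # camera-frame coordinates
    in_front = cam[:, 2] > 0                       # keep points ahead of camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective division
    H, W = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    colors = np.zeros((N, 3))
    colors[valid] = image[v[valid], u[valid]]
    return np.hstack([points, colors]), valid
```

Points behind the camera or projecting outside the image are flagged invalid rather than colored; miscalibrated extrinsics show up immediately in such a map as color "bleeding" across depth boundaries.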
Pages: 1-12 (12 pages)
References
24 records
[1] Chen Wenming, Hong Ru, Gai Shaoyan, Da Feipeng. Three-Dimensional Multi-Object Tracking Based on Feature Fusion and Similarity Estimation Network [J]. ACTA OPTICA SINICA, 2022, 42(16).
[2] Geiger A. PROC CVPR IEEE, 2012: P3354. DOI: 10.1109/CVPR.2012.6248074.
[3] Iyer G. IEEE INT C INT ROBOT, 2018: P1110. DOI: 10.1109/IROS.2018.8593693.
[4] Kendall Alex, Grimes Matthew, Cipolla Roberto. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015: 2938-2946.
[5] Kümmerle J. IEEE INT CONF ROBOT, 2020: P6028. DOI: 10.1109/ICRA40945.2020.9197496.
[6] Li Jianan, Wang Ze, Xu Tingfa. Three-Dimensional Object Detection Technology Based on Point Cloud Data [J]. ACTA OPTICA SINICA, 2023, 43(15).
[7] Liu Xiyuan, Yuan Chongjian, Zhang Fu. Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras Using Adaptive Voxelization [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71.
[8] Lv Xudong, Wang Shuo, Ye Dong. CFNet: LiDAR-Camera Registration Using Calibration Flow Network [J]. SENSORS, 2021, 21(23).
[9] Lv Xudong, Wang Boya, Dou Ziwen, Ye Dong, Wang Shuo. LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2021: 2888-2895.
[10] Nguyen An Duy, Yoo Myungsik. CalibBD: Extrinsic Calibration of the LiDAR and Camera Using a Bidirectional Neural Network [J]. IEEE ACCESS, 2022, 10: 121261-121271.