A Robust LiDAR-Camera Self-Calibration Via Rotation-Based Alignment and Multi-Level Cost Volume

Cited by: 4
Authors
Duan, Zaipeng [1 ,2 ]
Hu, Xuzhong [1 ,2 ]
Ding, Junfeng [1 ,2 ]
An, Pei [3 ]
Huang, Xiao [4 ]
Ma, Jie [1 ,2 ]
Affiliations
[1] Huazhong Univ Sci & Technol HUST, Sch Artificial Intelligence & Automat, Wuhan 430074, Hubei, Peoples R China
[2] HUST, Natl Key Lab Sci & Technol Multispectral Informat, Wuhan 430074, Hubei, Peoples R China
[3] Wuhan Inst Technol, Sch Elect & Informat Engn, Wuhan 430205, Peoples R China
[4] China Ship Dev & Design Ctr, Wuhan 430064, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Calibration; sensor fusion; deep learning; ATTENTION;
DOI
10.1109/LRA.2023.3336250
Chinese Library Classification
TP24 [Robotics];
Subject Classification Codes
080202 ; 1405 ;
Abstract
Collaborative perception has become a significant trend in self-driving and robot navigation. The precondition for multi-sensor fusion is accurate calibration between sensors. Traditional LiDAR-Camera calibration relies on laborious manual operations. Several recent studies have demonstrated the feature-extraction advantages of convolutional neural networks. However, the vast modality discrepancy between RGB images and point clouds makes it difficult to find corresponding features, which remains a challenge for LiDAR-Camera calibration. In this letter, we propose a new robust online LiDAR-Camera self-calibration network (SCNet). To reduce the search dimensionality for feature matching, we exploit self-supervised learning to align RGB images with projected depth images in 2D pixel coordinates, thereby pre-aligning the roll angle. In addition, to generate more accurate initial similarity measures between RGB image pixels and candidate corresponding projected depth image pixels, we propose a novel multi-level patch matching method that concatenates cost volumes constructed from multi-level feature maps. Our method achieves a mean absolute calibration error of 0.724 cm in translation and 0.055° in rotation in single-frame analysis with miscalibration magnitudes of up to ±1.5 m and ±20° on the KITTI odometry dataset, which demonstrates the superiority of our method.
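The abstract describes the multi-level matching step only at a high level: build a correlation-style cost volume between RGB and projected-depth feature maps at each pyramid level, then concatenate the per-level volumes. The sketch below illustrates that general construction; it is not the paper's implementation, and the function names, the dot-product similarity, the horizontal-only search window, and nearest-neighbor upsampling are all assumptions for illustration.

```python
import numpy as np

def cost_volume(feat_rgb, feat_depth, max_disp):
    """Correlation cost volume between two (C, H, W) feature maps.

    Candidate correspondences are searched over horizontal shifts in
    [-max_disp, max_disp]; returns a (2*max_disp+1, H, W) volume of
    channel-averaged dot-product similarities.
    """
    C, H, W = feat_rgb.shape
    vol = np.zeros((2 * max_disp + 1, H, W), dtype=feat_rgb.dtype)
    for i, d in enumerate(range(-max_disp, max_disp + 1)):
        shifted = np.roll(feat_depth, d, axis=2)       # shift along width
        vol[i] = (feat_rgb * shifted).sum(axis=0) / C  # similarity score
    return vol

def multi_level_cost_volume(rgb_pyramid, depth_pyramid, max_disp):
    """Concatenate cost volumes built from multi-level feature maps.

    Assumes coarser pyramid levels evenly divide the finest resolution;
    each coarse volume is nearest-neighbor upsampled to the finest level
    before concatenation along the cost axis.
    """
    H, W = rgb_pyramid[0].shape[1:]
    volumes = []
    for fr, fd in zip(rgb_pyramid, depth_pyramid):
        v = cost_volume(fr, fd, max_disp)
        ry, rx = H // v.shape[1], W // v.shape[2]
        volumes.append(np.repeat(np.repeat(v, ry, axis=1), rx, axis=2))
    return np.concatenate(volumes, axis=0)
```

With two pyramid levels and `max_disp=2`, each level contributes 5 similarity maps, so the concatenated volume has 10 channels at the finest resolution.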
Pages: 627-634 (8 pages)
References
34 in total
[1]   Geometric calibration for LiDAR-camera system fusing 3D-2D and 3D-3D point correspondences [J].
An, Pei ;
Ma, Tao ;
Yu, Kun ;
Fang, Bin ;
Zhang, Jun ;
Fu, Wenxing ;
Ma, Jie .
OPTICS EXPRESS, 2020, 28 (02) :2122-2141
[2]   nuScenes: A multimodal dataset for autonomous driving [J].
Caesar, Holger ;
Bankiti, Varun ;
Lang, Alex H. ;
Vora, Sourabh ;
Liong, Venice Erin ;
Xu, Qiang ;
Krishnan, Anush ;
Pan, Yu ;
Baldan, Giancarlo ;
Beijbom, Oscar .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :11618-11628
[3]   Pyramid Stereo Matching Network [J].
Chang, Jia-Ren ;
Chen, Yong-Sheng .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :5410-5418
[4]  
Cui JH, 2020, arXiv preprint, arXiv:2011.08516
[5]  
Dhall A, 2017, arXiv preprint, arXiv:1705.09785
[6]   FlowNet: Learning Optical Flow with Convolutional Networks [J].
Dosovitskiy, Alexey ;
Fischer, Philipp ;
Ilg, Eddy ;
Haeusser, Philip ;
Hazirbas, Caner ;
Golkov, Vladimir ;
van der Smagt, Patrick ;
Cremers, Daniel ;
Brox, Thomas .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :2758-2766
[7]   LiDAR-Camera Calibration Under Arbitrary Configurations: Observability and Methods [J].
Fu, Bo ;
Wang, Yue ;
Ding, Xiaqing ;
Jiao, Yanmei ;
Tang, Li ;
Xiong, Rong .
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2020, 69 (06) :3089-3102
[8]  
Geiger A, 2012, PROC CVPR IEEE, P3354, DOI 10.1109/CVPR.2012.6248074
[9]  
Gidaris S, 2018, Proc. ICLR
[10]   Searching for MobileNetV3 [J].
Howard, Andrew ;
Sandler, Mark ;
Chu, Grace ;
Chen, Liang-Chieh ;
Chen, Bo ;
Tan, Mingxing ;
Wang, Weijun ;
Zhu, Yukun ;
Pang, Ruoming ;
Vasudevan, Vijay ;
Le, Quoc V. ;
Adam, Hartwig .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :1314-1324