Automated Extrinsic Calibration of Multi-Cameras and LiDAR

Cited by: 0
Authors
Zhang, Xinyu [1 ,2 ]
Xiong, Yijin [1 ,3 ]
Qu, Qianxin [4 ]
Zhu, Shifan [4 ]
Guo, Shichun [4 ]
Jin, Dafeng [4 ]
Zhang, Guoying [3 ,5 ]
Ren, Haibing [5 ]
Li, Jun [2 ]
Affiliations
[1] Tsinghua Univ, State Key Lab Intelligent Green Vehicle & Mobil, Beijing 100084, Peoples R China
[2] Beihang Univ, Sch Transportat Sci & Engn, Beijing 100191, Peoples R China
[3] China Univ Min & Technol Beijing, Sch Artificial Intelligence, Beijing 100083, Peoples R China
[4] Tsinghua Univ, State Key Lab Intelligent Green Vehicle & Mobil, Beijing 100084, Peoples R China
[5] Autonomous Delivery Grp Meituan, Beijing 100102, Peoples R China
Funding
National High Technology Research and Development Program of China (863 Program);
Keywords
Autonomous driving; extrinsic calibration; global optimization; multiple sensors system; SELF-CALIBRATION; VEHICLE; VISION;
DOI
10.1109/TIM.2023.3341122
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology & Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
In intelligent driving systems, the multisensor fusion perception system comprising multiple cameras and LiDAR has become a crucial component. Stable extrinsic parameters among the devices of a multisensor fusion system are essential to achieve all-weather sensing with no blind zones. However, prolonged vehicle usage can introduce hard-to-measure sensor offsets that lead to perception deviations. To this end, we study unified multisensor calibration, rather than calibrating a single pair of sensors as in previous work. Benefiting from the mutually constrained poses between different sensor pairs, the method improves calibration accuracy by around 20% compared with calibrating a single pair of sensors. The study can serve as a foundation for unified multisensor calibration, enabling the automatic joint optimization of all camera and LiDAR sensors onboard a vehicle within a single framework.
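The abstract's central idea, that pairwise extrinsics constrain one another when all sensors are refined jointly, can be illustrated with a small cycle-consistency refinement. The following Python sketch is only an illustration of that idea under stated assumptions, not the authors' method: the sensor names, noise model, and least-squares formulation are introduced here for demonstration.

```python
# A minimal sketch (not the paper's implementation) of mutual pose constraints:
# given noisy pairwise estimates LiDAR->cam0, LiDAR->cam1 and cam0->cam1,
# jointly refine the two LiDAR-camera extrinsics so that the cycle
# LiDAR -> cam0 -> cam1 -> LiDAR closes. All names and noise levels are
# illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.optimize import least_squares

def to_mat(x):
    """6-vector (rotation vector, translation) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(x[:3]).as_matrix()
    T[:3, 3] = x[3:]
    return T

def to_vec(T):
    """4x4 homogeneous transform -> 6-vector (rotation vector, translation)."""
    return np.concatenate([R.from_matrix(T[:3, :3]).as_rotvec(), T[:3, 3]])

def residual(x, meas):
    """Stack pairwise-measurement residuals plus the cycle-closure residual."""
    T_l_c0, T_l_c1 = to_mat(x[:6]), to_mat(x[6:])
    r = []
    # Direct pairwise measurements (e.g., from per-pair target calibration).
    r.append(to_vec(np.linalg.inv(meas["l_c0"]) @ T_l_c0))
    r.append(to_vec(np.linalg.inv(meas["l_c1"]) @ T_l_c1))
    # Mutual constraint: the cam0->cam1 transform implied by the two
    # LiDAR extrinsics should match the independently measured one.
    T_c0_c1 = np.linalg.inv(T_l_c0) @ T_l_c1
    r.append(to_vec(np.linalg.inv(meas["c0_c1"]) @ T_c0_c1))
    return np.concatenate(r)

# Toy ground truth and noisy "measurements" for the demo.
rng = np.random.default_rng(0)
gt_l_c0 = to_mat(np.array([0.0, 0.0, 0.1, 0.2, 0.0, -0.1]))
gt_l_c1 = to_mat(np.array([0.0, 0.1, 0.0, -0.2, 0.0, -0.1]))
noisy = lambda T: to_mat(to_vec(T) + 0.01 * rng.standard_normal(6))
meas = {"l_c0": noisy(gt_l_c0),
        "l_c1": noisy(gt_l_c1),
        "c0_c1": noisy(np.linalg.inv(gt_l_c0) @ gt_l_c1)}

x0 = np.concatenate([to_vec(meas["l_c0"]), to_vec(meas["l_c1"])])
sol = least_squares(residual, x0, args=(meas,))
print("refined LiDAR->cam0 extrinsic:\n", to_mat(sol.x[:6]))
```

The paper's framework optimizes all onboard cameras and LiDARs together; the one-LiDAR, two-camera cycle above only demonstrates why adding mutual constraints can pull down the error of any single pairwise estimate.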
Pages: 1-12
Number of pages: 12