Obstacle detection based on depth fusion of lidar and radar in challenging conditions

Cited by: 13
Authors
Xie, Guotao [1 ]
Zhang, Jing [1 ]
Tang, Junfeng [1 ]
Zhao, Hongfei [2 ]
Sun, Ning [1 ]
Hu, Manjiang [3 ]
Affiliations
[1] Hunan Univ, Changsha, Peoples R China
[2] 31605 Troops, Nanjing, Peoples R China
[3] Hunan Univ, Coll Mech & Vehicle Engn, Changsha, Peoples R China
Source
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION | 2021, Vol. 48, No. 6
Keywords
Obstacle detection; Challenging conditions; Depth fusion
DOI
10.1108/IR-12-2020-0271
Chinese Library Classification
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Purpose: For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. Perception accuracy, however, depends closely on the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve environmental perception, this paper introduces an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which reduces the false-detection rate caused by the misdetections of individual sensors.

Design/methodology/approach: First, a multi-layer self-calibration method is proposed based on the spatial and temporal relationships between the two sensors. Next, a depth fusion model is proposed to improve obstacle-detection performance in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.

Findings: The experimental tests demonstrate that, compared with the use of a single sensor, the depth fusion model can filter out both the false alarms of the radar and the dust or mist point clouds received by the lidar, so object detection accuracy is improved under challenging conditions.

Originality/value: The multi-layer self-calibration method improves the accuracy of calibration and reduces the workload of manual calibration. Moreover, the depth fusion model of lidar and radar achieves high precision by filtering out the radar's false alarms and the dust or mist point clouds received by the lidar, which improves the performance of ICVs in challenging conditions.
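As a rough, self-contained illustration of the cross-validation idea behind such a depth fusion model (a minimal sketch, not the authors' published algorithm), the Python snippet below assumes time-synchronised 2-D radar targets and pre-clustered lidar points; the extrinsic calibration (R, t), the association gate and the function names are all hypothetical stand-ins for the paper's multi-layer self-calibration and fusion stages. A radar target with no lidar cluster nearby is treated as a radar false alarm, and a lidar cluster with no radar support is treated as dust or mist, which radar largely penetrates.

```python
import numpy as np

def radar_to_lidar_frame(radar_xy, R, t):
    """Transform 2-D radar detections into the lidar frame.

    R (2x2 rotation) and t (length-2 translation) stand in for the
    spatial part of a lidar-radar extrinsic calibration; temporal
    alignment (timestamp matching) is assumed to happen upstream.
    """
    return np.asarray(radar_xy) @ R.T + t

def cross_validate(radar_xy, lidar_clusters, R, t, gate=1.5):
    """Keep only obstacles confirmed by both sensors.

    radar_xy       : (N, 2) radar target positions in metres
    lidar_clusters : list of (M_i, 2) arrays of clustered lidar points
    gate           : association gate in metres (an assumed threshold)
    """
    radar_pts = radar_to_lidar_frame(radar_xy, R, t)
    obstacles = []
    for cluster in lidar_clusters:
        centroid = cluster.mean(axis=0)
        # A cluster is confirmed only if a radar return falls within
        # the gate; unsupported clusters are treated as dust/mist, and
        # radar targets matching no cluster are never emitted at all.
        if radar_pts.size and np.min(
                np.linalg.norm(radar_pts - centroid, axis=1)) < gate:
            obstacles.append(centroid)
    return obstacles

# Demo: one real obstacle near (10, 2) m seen by both sensors, one
# radar false alarm at (30, -5) m and one diffuse dust cloud near
# (5, 0) m seen only by the lidar.
rng = np.random.default_rng(0)
clusters = [np.array([10.0, 2.0]) + 0.2 * rng.standard_normal((20, 2)),
            np.array([5.0, 0.0]) + 1.0 * rng.standard_normal((40, 2))]
radar = np.array([[10.1, 1.9], [30.0, -5.0]])
R, t = np.eye(2), np.zeros(2)  # identity extrinsics for the demo
print(cross_validate(radar, clusters, R, t))  # only the (10, 2) obstacle
```

In practice the gate would be derived from the residual error of the calibration and the range accuracy of both sensors, rather than fixed by hand as in this sketch.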
Pages: 792-802
Page count: 11