Cooperative Perception for 3D Object Detection in Driving Scenarios Using Infrastructure Sensors

Cited by: 164
Authors
Arnold, Eduardo [1 ]
Dianati, Mehrdad [1 ]
de Temple, Robert [2 ]
Fallah, Saber [3 ]
Affiliations
[1] Univ Warwick, Warwick Mfg Grp, Coventry CV4 7AL, W Midlands, England
[2] Jaguar Land Rover Ltd, Coventry CV4 7HS, W Midlands, England
[3] Univ Surrey, Connected & Autonomous Vehicles Lab CAV Lab, Guildford GU2 7XH, Surrey, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Three-dimensional displays; Object detection; Sensor fusion; Sensor systems; Autonomous vehicles; Fuses; cooperative perception; autonomous vehicles; ADAS; deep learning;
DOI
10.1109/TITS.2020.3028424
CLC Number
TU [Building Science];
Discipline Code
0813;
Abstract
3D object detection is a common function within the perception system of an autonomous vehicle and outputs a list of 3D bounding boxes around objects of interest. Various 3D object detection methods have relied on fusion of different sensor modalities to overcome the limitations of individual sensors. However, occlusion, limited field of view and low point density of the sensor data cannot be reliably and cost-effectively addressed by multi-modal sensing from a single point of view. Alternatively, cooperative perception incorporates information from spatially diverse sensors distributed around the environment as a way to mitigate these limitations. This article proposes two schemes for cooperative 3D object detection using single-modality sensors. The early fusion scheme combines point clouds from multiple spatially diverse sensing points of view before detection. In contrast, the late fusion scheme fuses the independently detected bounding boxes from multiple spatially diverse sensors. We evaluate the performance of both schemes, and their hybrid combination, using a synthetic cooperative dataset created in two complex driving scenarios, a T-junction and a roundabout. The evaluation shows that the early fusion approach outperforms late fusion by a significant margin at the cost of higher communication bandwidth. The results demonstrate that cooperative perception can recall more than 95% of the objects, as opposed to 30% for single-point sensing in the most challenging scenario. To provide practical insights into the deployment of such a system, we report how the number of sensors and their configuration impact the detection performance of the system.
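To illustrate the distinction the abstract draws between the two schemes, the sketch below shows a minimal version of each: early fusion registers raw point clouds from several sensors into a shared frame before any detector runs, while late fusion pools independently detected boxes and suppresses duplicates. This is an assumption-laden toy (axis-aligned bird's-eye-view boxes, greedy NMS, hypothetical function names), not the paper's implementation.

```python
import numpy as np

def to_global(points, pose):
    """Transform an (N, 3) point cloud from a sensor's local frame into
    the shared global frame using a 4x4 homogeneous pose matrix."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ pose.T)[:, :3]

def early_fusion(clouds, poses):
    """Early fusion: register every sensor's raw point cloud into one
    common frame and concatenate, so a single detector sees them all."""
    return np.vstack([to_global(c, p) for c, p in zip(clouds, poses)])

def iou_bev(a, b):
    """IoU of two axis-aligned bird's-eye-view boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def late_fusion(per_sensor_detections, iou_thr=0.5):
    """Late fusion: pool the (box, score) pairs detected independently
    by each sensor, then greedily suppress overlapping duplicates."""
    pooled = sorted((d for dets in per_sensor_detections for d in dets),
                    key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pooled:
        if all(iou_bev(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept
```

The bandwidth trade-off the abstract reports falls out of the interfaces: early fusion must ship full point clouds to a fusion point, whereas late fusion only transmits a handful of boxes and scores per sensor.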
Pages: 1852-1864
Page count: 13