Multi-Class Road User Detection With 3+1D Radar in the View-of-Delft Dataset

Cited by: 135
Authors:
Palffy, Andras [1]
Pool, Ewoud [1]
Baratam, Srimannarayana [1]
Kooij, Julian F. P. [1]
Gavrila, Dariu M. [1]
Affiliations:
[1] Delft University of Technology, Intelligent Vehicles Group, NL-2628 CD Delft, Netherlands
Keywords:
Object detection; segmentation and categorization; data sets for robotic vision; automotive radars; AUTOMOTIVE RADAR; CAMERA
DOI:
10.1109/LRA.2022.3147324
Chinese Library Classification:
TP24 [Robotics]
Discipline Codes:
080202; 1405
Abstract:
Next-generation automotive radars provide elevation data in addition to range, azimuth, and Doppler velocity. In this experimental study, we apply a state-of-the-art object detector (PointPillars), previously used for 3D LiDAR data, to such 3+1D radar data (where 1D refers to Doppler). In ablation studies, we first explore the benefits of the additional elevation information, together with those of Doppler, radar cross section, and temporal accumulation, in the context of multi-class road user detection. We subsequently compare object detection performance on the radar and LiDAR point clouds, both class-wise and as a function of distance. To facilitate our experimental study, we present the novel View-of-Delft (VoD) automotive dataset. It contains 8,693 frames of synchronized and calibrated 64-layer LiDAR, (stereo) camera, and 3+1D radar data acquired in complex, urban traffic, with 123,106 3D bounding box annotations of both moving and static objects, including 26,587 pedestrian, 10,800 cyclist, and 26,949 car labels. Our results show that object detection on 64-layer LiDAR data still outperforms that on 3+1D radar data, but the addition of elevation information and the integration of successive radar scans help close the gap. The VoD dataset is made freely available for scientific benchmarking at https://intelligent-vehicles.org/datasets/view-of-delft/.
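As a concrete illustration of the pipeline the abstract describes, the sketch below assembles per-point feature vectors from 3+1D radar scans (x, y, optional elevation z, Doppler velocity, radar cross section) and merges several ego-motion-compensated scans into one input cloud for a PointPillars-style detector. This is a minimal sketch under assumed data layouts, not the VoD dev-kit API: the scan dictionaries ('xyz', 'doppler', 'rcs') and the 4x4 sensor-to-world poses are hypothetical names chosen for illustration.

```python
import numpy as np

def accumulate_radar_scans(scans, poses, n_accum=3, use_elevation=True,
                           use_doppler=True, use_rcs=True):
    """Merge the last n_accum radar scans into one ego-motion-compensated
    point cloud and select the per-point features used in the ablations.

    scans: list of dicts with 'xyz' (N,3), 'doppler' (N,), 'rcs' (N,) arrays
    poses: list of 4x4 sensor-to-world transforms, one per scan (assumed)
    """
    to_latest = np.linalg.inv(poses[-1])   # world -> latest sensor frame
    merged = []
    for scan, pose in zip(scans[-n_accum:], poses[-n_accum:]):
        # Express older scans in the latest sensor frame (ego-motion compensation).
        n = len(scan['xyz'])
        xyz_h = np.hstack([scan['xyz'], np.ones((n, 1))])
        xyz = (xyz_h @ (to_latest @ pose).T)[:, :3]
        cols = [xyz[:, :2]]                # x, y are always used
        if use_elevation:
            cols.append(xyz[:, 2:3])       # z: the extra elevation channel
        if use_doppler:
            # Relative radial velocity; ego-motion compensation of the
            # Doppler values themselves is omitted here for brevity.
            cols.append(scan['doppler'][:, None])
        if use_rcs:
            cols.append(scan['rcs'][:, None])  # radar cross section
        merged.append(np.hstack(cols))
    return np.vstack(merged)               # detector input, shape (M, F)
```

Toggling use_elevation, use_doppler, use_rcs, and n_accum mirrors the ablation axes named in the abstract (elevation, Doppler, radar cross section, and temporal accumulation of successive scans).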
Pages: 4961-4968
Page count: 8