RADIATE: A Radar Dataset for Automotive Perception in Bad Weather

Cited by: 119
Authors
Sheeny, Marcel [1 ]
De Pellegrin, Emanuele [1 ]
Mukherjee, Saptarshi [1 ]
Ahrabian, Alireza [1 ]
Wang, Sen [1 ]
Wallace, Andrew [1 ]
Affiliations
[1] Heriot Watt Univ, Inst Sensors Signals & Syst, Edinburgh, Midlothian, Scotland
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021) | 2021
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
ATTENUATION
DOI
10.1109/ICRA48506.2021.9562089
CLC Number
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Datasets for autonomous cars are essential for the development and benchmarking of perception systems. However, most existing datasets are captured with camera and LiDAR sensors in good weather conditions. In this paper, we present the RAdar Dataset In Adverse weaThEr (RADIATE), aiming to facilitate research on object detection, tracking and scene understanding using radar sensing for safe autonomous driving. RADIATE includes 3 hours of annotated radar images with more than 200K labelled road actors in total, on average about 4.6 instances per radar image. It covers 8 different categories of actors in a variety of weather conditions (e.g., sun, night, rain, fog and snow) and driving scenarios (e.g., parked, urban, motorway and suburban), representing different levels of challenge. To the best of our knowledge, this is the first public radar dataset which provides high-resolution radar images on public roads with a large number of labelled road actors. The data collected in adverse weather, e.g., fog and snowfall, is unique. Some baseline results of radar-based object detection and recognition are given to show that the use of radar data is promising for automotive applications in bad weather, where vision and LiDAR fail. RADIATE also has stereo images, 32-channel LiDAR and GPS data, directed at other applications such as sensor fusion, localisation and mapping. The public dataset can be accessed at http://pro.hw.ac.uk/radiate/.
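The headline figures in the abstract (3 hours of data, about 4.6 instances per radar image, more than 200K labels) can be cross-checked with a quick back-of-the-envelope calculation. The 4 Hz scan rate used below is an assumption (it is not stated in the abstract), so this is only an order-of-magnitude sanity check:

```python
# Sanity check of RADIATE's headline statistics from the abstract:
# 3 hours of radar data, ~4.6 labelled instances per radar image,
# >200K labelled road actors in total.
hours = 3
assumed_scan_rate_hz = 4  # ASSUMPTION: frame rate of the scanning radar
frames = hours * 3600 * assumed_scan_rate_hz        # 43200 radar images
instances_per_frame = 4.6                           # from the abstract
total_labels = frames * instances_per_frame         # ~198720 labels
print(f"{frames} frames, ~{round(total_labels)} labelled instances")
```

At an assumed 4 Hz the counts come out to roughly 2 x 10^5 labels, consistent in magnitude with the quoted figure of more than 200K.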
Pages: 5617-5623
Page count: 7