YOdar: Uncertainty-based Sensor Fusion for Vehicle Detection with Camera and Radar Sensors

Cited by: 6
Authors
Kowol, Kamil [1 ]
Rottmann, Matthias [1 ]
Bracke, Stefan [2 ]
Gottschalk, Hanno [1 ]
Affiliations
[1] Univ Wuppertal, Sch Math & Nat Sci, Gaussstr 20, Wuppertal, Germany
[2] Univ Wuppertal, Chair Reliabil Engn & Risk Analyt, Gaussstr 20, Wuppertal, Germany
Source
ICAART: PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE - VOL 2 | 2021
Keywords
Uncertainty in AI; Machine Learning; Sensor Fusion; Vehicle Detection at Night;
DOI
10.5220/0010239301770186
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we present an uncertainty-based method for sensor fusion of camera and radar data. The outputs of two neural networks, one processing camera data and the other radar data, are combined in an uncertainty-aware manner. To this end, we gather the outputs and corresponding meta information of both networks. For each predicted object, the gathered information is post-processed by a gradient boosting method to produce a joint prediction of both networks. In our experiments we combine the YOLOv3 object detection network with a customized 1D radar segmentation network and evaluate our method on the nuScenes dataset. In particular, we focus on night scenes, where the capability of object detection networks based on camera data is potentially handicapped. Our experiments show that this uncertainty-aware fusion approach, which is also highly modular, significantly outperforms the single-sensor baselines and is on par with specifically tailored deep-learning-based fusion approaches.
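The fusion step described in the abstract can be pictured with a small, hypothetical sketch: for each predicted object, scores and meta information from the camera network and the radar network form a feature vector, and a gradient boosting model is trained to output the joint confidence. The feature names, toy data, and use of scikit-learn's GradientBoostingClassifier below are illustrative assumptions under the abstract's description, not the authors' exact implementation.

    # Hypothetical sketch of uncertainty-aware camera/radar fusion via gradient boosting.
    # Assumed per-detection features (illustrative): YOLOv3 score, box width/height,
    # mean and max radar segmentation score inside the box.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    X_train = np.array([
        [0.92, 0.10, 0.15, 0.80, 0.95],   # likely true positive
        [0.35, 0.05, 0.04, 0.10, 0.20],   # likely false positive
        [0.60, 0.08, 0.12, 0.70, 0.85],
        [0.40, 0.30, 0.25, 0.05, 0.10],
    ])
    y_train = np.array([1, 0, 1, 0])      # 1 = true positive, 0 = false positive

    # Gradient boosting post-processor producing the joint (fused) prediction.
    fusion_model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
    fusion_model.fit(X_train, y_train)

    # At inference time, each new detection receives a fused confidence score.
    new_detection = np.array([[0.55, 0.09, 0.11, 0.75, 0.90]])
    fused_confidence = fusion_model.predict_proba(new_detection)[0, 1]
    print(f"fused confidence: {fused_confidence:.2f}")

Because the post-processor only consumes per-object meta information, either network could in principle be swapped out without retraining the other, which is the modularity the abstract refers to.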
Pages: 177-186
Page count: 10