Deep Active Learning for Efficient Training of a LiDAR 3D Object Detector

Cited by: 0
Authors
Feng, Di [1 ,2 ]
Wei, Xiao [1 ,3 ]
Rosenbaum, Lars [1 ]
Maki, Atsuto [3 ]
Dietmayer, Klaus [2 ]
Affiliations
[1] Robert Bosch GmbH, Driver Assistance Syst & Automated Driving, Corp Res, D-71272 Renningen, Germany
[2] Ulm Univ, Inst Measurement Control & Microtechnol, D-89081 Ulm, Germany
[3] KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, S-10044 Stockholm, Sweden
Source
2019 30TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV19) | 2019
Keywords
Deep neural network; active learning; uncertainty estimation; object detection; autonomous driving;
DOI
10.1109/ivs.2019.8814236
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Training a deep object detector for autonomous driving requires a huge amount of labeled data. While recording data via on-board sensors such as camera or LiDAR is relatively easy, annotating the data is tedious and time-consuming, especially for 3D LiDAR point clouds or radar data. Active learning has the potential to minimize the human annotation effort while maximizing the object detector's performance. In this work, we propose an active learning method to train a LiDAR 3D object detector with the least amount of labeled training data necessary. The detector leverages 2D region proposals generated from the RGB images to reduce the search space for objects and speed up the learning process. Experiments show that our proposed method works with different uncertainty estimates and query functions, and can save up to 60% of the labeling effort while reaching the same network performance.
Pages: 667-674
Number of pages: 8
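
The abstract describes a pool-based active learning pipeline: train the detector on the currently labeled set, score the unlabeled pool with an uncertainty estimate, query the top-scoring samples for human annotation, and repeat. Below is a minimal Python sketch of that generic loop, assuming Shannon entropy as the uncertainty score; the callables predict, train, and annotate are hypothetical placeholders, and nothing here reproduces the paper's actual implementation.

    import math
    from typing import Callable, List, Sequence, Tuple

    def entropy(probs: Sequence[float]) -> float:
        """Shannon entropy of a predicted class distribution; higher = more uncertain."""
        return -sum(p * math.log(p + 1e-12) for p in probs)

    def query_most_uncertain(pool: List[object],
                             predict: Callable[[object], Sequence[float]],
                             batch_size: int) -> List[int]:
        """Indices of the batch_size pool samples the current model is least sure about."""
        ranked = sorted(range(len(pool)),
                        key=lambda i: entropy(predict(pool[i])),
                        reverse=True)
        return ranked[:batch_size]

    def active_learning_loop(pool: List[object],
                             labeled: List[Tuple[object, object]],
                             predict: Callable[[object], Sequence[float]],
                             train: Callable[[List[Tuple[object, object]]], None],
                             annotate: Callable[[object], object],
                             rounds: int = 10,
                             batch_size: int = 100) -> List[Tuple[object, object]]:
        """Train, query the most uncertain samples, have them annotated, repeat."""
        for _ in range(rounds):
            train(labeled)                              # retrain on current labels
            picked = query_most_uncertain(pool, predict, batch_size)
            for i in sorted(picked, reverse=True):      # pop from the back first
                labeled.append((pool[i], annotate(pool[i])))  # oracle provides the label
                pool.pop(i)
        return labeled

Other query functions can be dropped in by replacing entropy, e.g. a score derived from multiple stochastic forward passes; the loop structure itself stays the same.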