Automatic Labeling to Generate Training Data for Online LiDAR-Based Moving Object Segmentation

Cited by: 53
Authors
Chen, Xieyuanli [1 ]
Mersch, Benedikt [1 ]
Nunes, Lucas [1 ]
Marcuzzi, Rodrigo [1 ]
Vizzo, Ignacio [1 ]
Behley, Jens [1 ]
Stachniss, Cyrill [1 ,2 ]
Affiliations
[1] Univ Bonn, D-53115 Bonn, Germany
[2] Univ Oxford, Dept Engn Sci, Oxford OX1 2JD, England
Keywords
Deep learning methods; object detection; segmentation and categorization; semantic scene understanding
DOI
10.1109/LRA.2022.3166544
CLC Classification Number
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Understanding the scene is key for autonomously navigating vehicles, and the ability to segment the surroundings online into moving and non-moving objects is a central ingredient of this task. Often, deep learning-based methods are used to perform moving object segmentation (MOS). The performance of these networks, however, strongly depends on the diversity and amount of labeled training data, which may be costly to obtain. In this letter, we propose an automatic data labeling pipeline for 3D LiDAR data to save the extensive manual labeling effort and to improve the performance of existing learning-based MOS systems by automatically annotating training data. Our proposed approach achieves this by processing the data offline in batches, i.e., it is not designed to run online on a vehicle. It labels the actually moving objects, such as driving cars and pedestrians, as moving. In contrast, the non-moving objects, e.g., parked cars, lamps, roads, or buildings, are labeled as static. We show that this approach allows us to label LiDAR data highly effectively and compare our results to those of other label generation methods. We also train a deep neural network with our automatically generated labels and achieve performance comparable to a network trained with manual labels on the same data, and even better performance when using additional datasets with labels generated by our approach. Furthermore, we evaluate our method on multiple datasets using different sensors, and our experiments indicate that our method can generate labels in different outdoor environments.
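The core idea of offline moving-vs-static labeling can be illustrated with a toy sketch: given two LiDAR scans already registered into a common world frame, points that have no close counterpart in the other scan are likely to belong to moving objects. This is only a minimal illustration of the principle, not the authors' actual pipeline; the function name, the brute-force nearest-neighbor search, and the 0.5 m threshold are all assumptions for demonstration.

```python
import numpy as np

def label_moving_points(scan_a, scan_b, threshold=0.5):
    """Label points in scan_a as moving (1) or static (0) by checking
    whether each point has a close neighbor in scan_b.

    scan_a, scan_b: (N, 3) and (M, 3) arrays of points, assumed to be
    already registered into a common world frame (e.g., via SLAM poses).
    threshold: distance in meters beyond which a point counts as moving
    (a hypothetical value, not taken from the paper).
    """
    # Brute-force nearest-neighbor distance from each point in scan_a
    # to the closest point in scan_b (fine for small toy scans; a k-d
    # tree would be used at scale).
    diffs = scan_a[:, None, :] - scan_b[None, :, :]      # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=-1).min(axis=1)   # (N,)
    return (dists > threshold).astype(np.int8)

# Toy example: a static wall point and a car that moved 2 m between scans.
scan_t0 = np.array([[10.0, 0.0, 1.0],   # wall (static)
                    [ 5.0, 0.0, 0.5]])  # car at t0
scan_t1 = np.array([[10.0, 0.0, 1.0],   # wall again
                    [ 7.0, 0.0, 0.5]])  # car drove 2 m forward

labels = label_moving_points(scan_t0, scan_t1)
print(labels)  # wall -> 0 (static), car -> 1 (moving)
```

A real pipeline must additionally handle occlusions, sensor noise, and viewpoint changes, which is why batch processing over many scans (rather than a single pair) is needed in practice.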
Pages: 6107-6114
Page count: 8