A LiDAR Point Cloud Generator: from a Virtual World to Autonomous Driving

Cited by: 152
Authors
Yue, Xiangyu [1 ]
Wu, Bichen [1 ]
Seshia, Sanjit A. [1 ]
Keutzer, Kurt [1 ]
Sangiovanni-Vincentelli, Alberto L. [1 ]
Affiliations
[1] Univ Calif Berkeley, EECS, Berkeley, CA 94720 USA
Source
ICMR '18: PROCEEDINGS OF THE 2018 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL | 2018
Funding
US National Science Foundation;
Keywords
LiDAR Point Cloud; Simulation Environment; Autonomous Driving; Neural Network Analysis; Neural Network Retraining; OBJECT RETRIEVAL;
DOI
10.1145/3206025.3206080
Chinese Library Classification
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
3D LiDAR scanners play an increasingly important role in autonomous driving because they generate depth information about the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation, which hampers the efficient development of supervised deep learning algorithms that are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To the best of our knowledge, this is the first published LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network and to make the network more robust by retraining on the falsifying examples. In addition, scene images can be captured simultaneously for sensor fusion tasks, and we propose a method to automatically register the point clouds with the captured scene images. We show a significant improvement in accuracy (+9%) on point cloud segmentation by augmenting the training dataset with the generated synthetic data. Our experiments also show that, by testing and retraining the network using point clouds from user-configured scenes, the weaknesses and blind spots of the neural network can be fixed.
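The point-cloud-to-image registration mentioned in the abstract can be illustrated with a minimal pinhole-camera projection sketch. This is an illustrative assumption, not the paper's actual registration method: the matrix names (`T_cam_from_lidar`, `K`) and all numeric values below are hypothetical, and the sketch simply transforms each simulated LiDAR point into the camera frame with an extrinsic matrix and projects it with an intrinsic matrix.

```python
import numpy as np

def project_points(points_lidar, T_cam_from_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    T_cam_from_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 pinhole intrinsic matrix.
    Returns (uv, in_front): pixel coordinates of the points in front of the
    camera, and a boolean mask marking which input points those were.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous (N, 4)
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]      # camera frame (N, 3)
    in_front = pts_cam[:, 2] > 0                          # drop points behind the camera
    pts_cam = pts_cam[in_front]
    uv_h = (K @ pts_cam.T).T                              # homogeneous pixel coords
    uv = uv_h[:, :2] / uv_h[:, 2:3]                       # perspective divide
    return uv, in_front

# Toy usage: identity extrinsics, focal length 500, principal point (320, 240).
# A point on the optical axis at depth 10 m lands on the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
uv, mask = project_points(np.array([[0.0, 0.0, 10.0]]), T, K)
print(uv[0])  # [320. 240.]
```

In a real pipeline the extrinsics and intrinsics would be recovered from the game's camera pose and field of view, which is what makes a fully automatic registration possible in a simulated world.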
Pages: 458-464
Page count: 7