Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways

Cited by: 209
Authors
Tan, Weikai [1 ]
Qin, Nannan [1 ,2 ]
Ma, Lingfei [1 ]
Li, Ying [1 ]
Du, Jing [3 ]
Cai, Guorong [3 ]
Yang, Ke [4 ]
Li, Jonathan [1 ,4 ]
Affiliations
[1] Univ Waterloo, Dept Geog & Environm Management, Waterloo, ON N2L 3G1, Canada
[2] Chinese Acad Sci, Purple Mt Observ, Key Lab Planetary Sci, Nanjing 210033, JS, Peoples R China
[3] Jimei Univ, Coll Comp Engn, Xiamen 361021, FJ, Peoples R China
[4] Univ Waterloo, Dept Syst Design Engn, Waterloo, ON N2L 3G1, Canada
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020) | 2020
DOI
10.1109/CVPRW50498.2020.00109
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Semantic segmentation of large-scale outdoor point clouds is essential for urban scene understanding in applications such as autonomous driving and urban high-definition (HD) mapping. With the rapid development of mobile laser scanning (MLS) systems, massive point clouds are available for scene understanding, but publicly accessible large-scale labeled datasets, which are essential for developing learning-based methods, remain limited. This paper introduces Toronto-3D, a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada, for semantic segmentation. The dataset covers approximately 1 km of roadway and consists of about 78.3 million points with 8 labeled object classes. Baseline experiments for semantic segmentation were conducted, and the results confirm that this dataset can train deep learning models effectively. Toronto-3D is publicly released to encourage new research, and its labels will be improved and updated with feedback from the research community.
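As a minimal sketch of how a labeled point-cloud dataset of this kind is typically handled, the snippet below computes the per-class fraction of points, a common first step when checking class imbalance before training a segmentation model. The file format and class ordering of Toronto-3D are not specified here, so synthetic random labels stand in for the real data; `NUM_CLASSES = 8` follows the paper's stated number of labeled object classes.

```python
import numpy as np

# The paper states 8 labeled object classes; the synthetic labels below
# are placeholders, not the real Toronto-3D data or its on-disk format.
NUM_CLASSES = 8

def class_distribution(labels: np.ndarray, num_classes: int = NUM_CLASSES) -> np.ndarray:
    """Return the fraction of points assigned to each class label.

    Useful for spotting class imbalance (e.g. road points dominating
    pole or fence points) before training a segmentation model.
    """
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

# Synthetic stand-in: 100k points with uniformly random class labels.
rng = np.random.default_rng(42)
labels = rng.integers(0, NUM_CLASSES, size=100_000)
dist = class_distribution(labels)
```

In practice, a heavily skewed distribution computed this way often motivates class-weighted losses or per-class sampling during training.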
Pages: 797-806
Page count: 10