Transfer Learning in urban object classification: Online images to recognize point clouds

Cited by: 24
Authors
Balado, Jesus [1 ]
Sousa, Ricardo [2 ]
Diaz-Vilarino, Lucia [3 ]
Arias, Pedro [1 ]
Affiliations
[1] Univ Vigo, Appl Geotechnol Res Grp, Dept Nat Resources & Environm Engn, Sch Min & Energy Engn, Campus Lagoas Marcosende, Vigo 36310, Spain
[2] Univ Porto, LIAAD, INESC TEC, Campus Fac Engn, P-4200465 Porto, Portugal
[3] Univ Vigo, Sch Ind Engn, Dept Design Engn, Appl Geotechnol Res Grp, Campus Lagoas Marcosende, Vigo 36310, Spain
Funding
European Union's Horizon 2020;
Keywords
LiDAR; CNN; Inception; 3D data processing; Mobile laser scanning; Data fusion; TRAFFIC SIGN DETECTION; SEMANTIC SEGMENTATION; EXTRACTION; FUSION;
DOI
10.1016/j.autcon.2019.103058
Chinese Library Classification
TU [Building Science];
Subject classification code
0813;
Abstract
The application of Deep Learning techniques to point clouds for urban object classification is limited by the large number of samples needed. Acquiring and tagging point clouds is more expensive and tedious than the equivalent process for images, and online point cloud datasets contain few samples for Deep Learning and do not always cover the desired classes. This work focuses on minimizing the use of point cloud samples for neural network training in urban object classification. The proposed method is based on the conversion of point clouds to images (pc-images) because it enables the use of Convolutional Neural Networks, the generation of several samples (images) per object (point cloud) by means of multi-view, and the combination of pc-images with images from online datasets (ImageNet and Google Images). The study is conducted with ten classes of objects extracted from two street point clouds from two different cities. The network selected is InceptionV3. The training set consists of 5000 online images with a variable percentage (0% to 10%) of pc-images; the validation and testing sets are composed exclusively of pc-images. While the network trained only with online images reaches 47% accuracy, the inclusion of a small percentage of pc-images in the training set improves classification, reaching 99.5% accuracy with 6% pc-images. The network is also applied to the IQmulus & TerraMobilita Contest dataset, where it correctly classifies elements with few samples.
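The transfer-learning setup described in the abstract can be illustrated with a minimal sketch (not the authors' code): an ImageNet-pretrained InceptionV3 backbone whose convolutional base is frozen and whose new classification head is trained on a folder mixing online images with a small share of pc-images, while validation uses pc-images only. Directory names, image size, and hyper-parameters below are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 10        # ten urban object classes, as in the study
IMG_SIZE = (299, 299)   # default InceptionV3 input resolution
BATCH = 32

# Hypothetical folders: the training set mixes online images with a small
# percentage of pc-images per class; validation uses pc-images only.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train_mixed", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val_pc_images", image_size=IMG_SIZE, batch_size=BATCH)

# ImageNet-pretrained backbone; the convolutional base is frozen so only a
# new classification head is learned from the small mixed training set.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # InceptionV3 expects [-1, 1]
    base,
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)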
Pages: 11