Data Preparation Impact on Semantic Segmentation of 3D Mobile LiDAR Point Clouds Using Deep Neural Networks

Cited: 3
Authors
Kouhi, Reza Mahmoudi [1 ]
Daniel, Sylvie [1 ]
Giguere, Philippe [2 ]
Affiliations
[1] Univ Laval, Dept Geomat Sci, Quebec City, PQ G1V 0A6, Canada
[2] Univ Laval, Dept Comp Sci & Software Engn, Quebec City, PQ G1V 0A6, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
semantic segmentation; 3D point cloud; deep neural networks; LiDAR;
DOI
10.3390/rs15040982
Chinese Library Classification
X [Environmental Science, Safety Science];
Subject Classification Code
08; 0830;
Abstract
Currently, 3D point clouds are widely used due to their reliability in representing 3D objects and localizing them accurately. However, raw point clouds are unstructured and contain no semantic information about the objects they depict. Recently, dedicated deep neural networks have been proposed for the semantic segmentation of 3D point clouds. The focus has been placed on network architecture, yet the performance of some networks, such as Kernel Point Convolution (KPConv), shows that the way data are presented at the input of the network also matters. Few prior works have studied the impact of data preparation on the performance of deep neural networks. Therefore, our goal was to address this issue. We propose two novel data preparation methods that are compatible with the typical density variations of outdoor 3D LiDAR point clouds. We also investigated two existing data preparation methods to show their impact on deep neural networks. We compared the four methods with a baseline method based on the point cloud partitioning used in PointNet++. We experimented with two deep neural networks: PointNet++ and KPConv. The results showed that using any of the proposed data preparation methods improved the performance of both networks by a tangible margin compared to the baseline. The two novel data preparation methods achieved the best results among the investigated methods for both networks. We noticed that, for datasets containing many classes with widely varying sizes, KNN-based data preparation outperformed the Fixed Radius (FR) method. Moreover, this research allowed us to identify guidelines for selecting meaningful downsampling and partitioning of large-scale outdoor 3D LiDAR point clouds at the input of deep neural networks.
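The abstract contrasts KNN-based and Fixed Radius (FR) neighbourhood extraction, together with downsampling, as ways of preparing large outdoor LiDAR clouds for a network's input. The sketch below is not the authors' implementation; it is a minimal illustration of the two strategies using scipy's cKDTree, and the voxel size, k, and radius values are assumed placeholders rather than the paper's settings.

```python
# Minimal sketch (assumptions, not the paper's code) of the two data
# preparation strategies discussed in the abstract: KNN-based patches vs.
# Fixed Radius (FR) patches around seed points, after voxel downsampling.
import numpy as np
from scipy.spatial import cKDTree


def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Keep one averaged point per occupied voxel to tame density variation."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]


def knn_patch(tree: cKDTree, points: np.ndarray, seed: np.ndarray, k: int = 4096) -> np.ndarray:
    """KNN preparation: each patch holds exactly k points, so its spatial
    extent shrinks in dense regions and grows in sparse ones."""
    _, idx = tree.query(seed, k=min(k, len(points)))
    return points[idx]


def fixed_radius_patch(tree: cKDTree, points: np.ndarray, seed: np.ndarray, radius: float = 3.0) -> np.ndarray:
    """FR preparation: each patch covers a constant spatial extent, so the
    number of points varies with local density."""
    idx = tree.query_ball_point(seed, r=radius)
    return points[idx]


if __name__ == "__main__":
    cloud = np.random.rand(100_000, 3) * 50.0          # stand-in for a LiDAR tile
    cloud = voxel_downsample(cloud, voxel=0.06)         # illustrative voxel size
    tree = cKDTree(cloud)
    seed = cloud[np.random.randint(len(cloud))]         # random seed point
    print("KNN patch:", knn_patch(tree, cloud, seed).shape)
    print("FR  patch:", fixed_radius_patch(tree, cloud, seed).shape)
```

The trade-off the paper examines is visible here: a KNN patch fixes the point count and lets the spatial extent vary, while an FR patch fixes the extent and lets the point count follow the local density.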
Pages: 19