Learning-Based Sampling Method for Point Cloud Segmentation

Cited by: 1
Authors
An, Yi [1 ]
Wang, Jian [1 ]
He, Lijun [1 ]
Li, Fan [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Informat & Commun Engn, Shaanxi Key Lab Deep Space Explorat Intelligent In, Xian 710049, Peoples R China
Keywords
Point cloud compression; Task analysis; Sampling methods; Three-dimensional displays; Laser radar; Accuracy; Bidirectional control; Learning-based sampling method; point cloud; segmentation; SEMANTIC SEGMENTATION;
DOI
10.1109/JSEN.2024.3410373
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Subject Classification Codes
0808; 0809;
Abstract
Light detection and ranging (LiDAR) has become one of the most important sensors in 3-D perception. With advances in sensor technology, the point clouds generated by LiDAR have grown increasingly large, and processing them directly is difficult owing to hardware limitations, computational cost, storage constraints, and algorithmic complexity. One of the most common solutions is to downsample the point clouds. Learning-based downsampling methods have proven effective for point cloud classification, registration, and reconstruction, but their integration with segmentation tasks remains inadequately investigated. This is mainly because segmentation requires the sampled points to retain fine detail and complete structural information in order to achieve high accuracy, which greatly increases the difficulty of sampling. This article proposes a learning-based sampling method for the point cloud segmentation task. Our approach analyzes the spatial relationships within point clouds using a simplification network to generate sampled points, and a bidirectional chamfer distance (CD) ensures that the original and sampled point sets share similar structural characteristics. The experimental results demonstrate that our network, SampleSegNet, outperforms alternative sampling methods.
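The bidirectional chamfer distance mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the standard bidirectional CD between two point sets; the paper's exact formulation, weighting, and its integration into SampleSegNet's training loss are not specified here, so treat the details as assumptions.

```python
import numpy as np

def bidirectional_chamfer_distance(p, q):
    """Bidirectional (symmetric) chamfer distance between point sets.

    p: (N, 3) array, e.g. the original point cloud
    q: (M, 3) array, e.g. the sampled point cloud

    For each point in p, find its squared distance to the nearest point
    in q, and vice versa; return the sum of the two mean values.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # p -> q term: nearest neighbor in q for every point of p.
    p_to_q = d.min(axis=1).mean()
    # q -> p term: nearest neighbor in p for every point of q.
    q_to_p = d.min(axis=0).mean()
    return p_to_q + q_to_p
```

Because both directions are penalized, the sampled set can neither drop regions of the original cloud (which inflates the p-to-q term) nor place points far from the original surface (which inflates the q-to-p term), encouraging structurally similar point sets.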
Pages: 24140-24151
Page count: 12