DTNLS: 3D Point Cloud Segmentation Based on 2D Image and 3D Point Cloud Double Texture Feature

Cited by: 0
Authors
Liu, Zhiguang [1 ]
Yan, Xiaoxiao [2 ]
Zhao, Jiahui [1 ]
Shi, Yong [3 ]
Yu, Fei [4 ]
Zhao, Jian [1 ]
Affiliations
[1] Tianjin Chengjian Univ, Sch Control & Mech Engn, Tianjin 300384, Peoples R China
[2] CATARC Tianjin Automot Engn Res Inst Co Ltd, Tianjin 300339, Peoples R China
[3] Guangzhou City Construct Coll, Sch Elect & Mech Engn, Guangzhou 510900, Guangdong, Peoples R China
[4] Hebei Univ Technol, Sch Mech Engn, Tianjin 300130, Peoples R China
Source
IEEE ACCESS | 2025, Vol. 13
Keywords
Point cloud compression; Three-dimensional displays; Image segmentation; Laser radar; Accuracy; Cameras; Image color analysis; Textural features; segmentation and categorization; diffusion source; 3D point cloud segmentation;
DOI
10.1109/ACCESS.2024.3446593
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Panoptic segmentation of 3D point clouds is an essential and challenging technology for robots with 3D detection and measurement capabilities. To fuse the color information of 2D image pixels with the spatial position information of a 3D LiDAR point cloud, a correspondence must be established between the RGB values of pixels and the XYZ positions of the LiDAR points. We present Double Texture Neighbor LiDAR Segmentation (DTNLS) in this article. "Double texture" refers to the color texture of the image and the point-cloud texture of the 3D LiDAR. The DTNLS method first segments the pixels using the color texture features of the image, obtaining cluster centers and segmentation boundary contours. The 3D LiDAR point-cloud texture segmentation then takes these cluster centers as diffusion sources, diffuses outward along the ring-shaped LiDAR scan lines, locates point-cloud texture boundary features near the image segmentation boundary contours, and thereby realizes 3D point cloud segmentation. Quantitative experiments show that, compared with the best results among existing mainstream segmentation methods, DTNLS improves pedestrian segmentation accuracy by 32.2%, recall by 20.12%, and IoU by 48.8%. Empirical studies on public datasets and our own datasets demonstrate that DTNLS is broadly applicable and outperforms previous state-of-the-art techniques in 3D point cloud segmentation, without requiring any 2D image or 3D point cloud training data.
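The pixel-to-point correspondence described in the abstract is typically established by projecting each LiDAR point into the camera image through the extrinsic and intrinsic calibration. The sketch below is a minimal, hypothetical illustration of that fusion step (it is not the authors' implementation): `K` is an assumed 3x3 pinhole intrinsic matrix and `T_cam_lidar` an assumed 4x4 LiDAR-to-camera transform.

```python
import numpy as np

def colorize_points(points_xyz, image, K, T_cam_lidar):
    """Attach RGB values from a 2D image to 3D LiDAR points (XYZ).

    Hypothetical helper sketching the 2D-3D correspondence step:
    points_xyz  -- (N, 3) LiDAR points in the LiDAR frame
    image       -- (H, W, 3) RGB image
    K           -- (3, 3) camera intrinsic matrix (assumed known)
    T_cam_lidar -- (4, 4) LiDAR-to-camera extrinsic (assumed known)
    Returns per-point RGB and a mask of points visible in the image.
    """
    n = points_xyz.shape[0]
    # Homogeneous coordinates; transform points into the camera frame
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera (positive depth)
    front = pts_cam[:, 2] > 1e-6
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / np.where(front, pts_cam[:, 2], 1.0)[:, None]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = np.zeros((n, 3), dtype=image.dtype)
    rgb[valid] = image[v[valid], u[valid]]
    return rgb, valid
```

Once each point carries an RGB value, the image-side cluster centers can be mapped back to 3D points and used as the diffusion sources for the ring-wise boundary search the method describes.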
Pages: 104047-104057
Page count: 11