Subdivision of Adjacent Areas for 3D Point Cloud Semantic Segmentation

Cited: 0
Authors
Xu, Haixia [1 ]
Hu, Kaiyu [1 ]
Xu, Yuting [1 ]
Zhu, Jiang [1 ]
Affiliations
[1] Xiangtan Univ, Sch Automat & Elect Informat, Key Lab Intelligent Comp & Informat Proc, Minist Educ, Xiangtan 411100, Peoples R China
Keywords
Semantic segmentation; 3D point cloud; Global attention; Deep learning; EXTRACTION; NETWORK;
DOI
10.1007/s11760-024-03728-7
CLC classification: TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline codes: 0808; 0809
Abstract
In 3D point cloud semantic segmentation, much of the previous research has focused on aggregating the fine-grained geometric structures of local regions while overlooking long-term features. However, global long-term contextual dependencies play a role as important as local feature aggregation. This paper proposes a Subdivision of Adjacent Areas (SAA) module, which efficiently mines more informative features to enrich global long-term contextual dependencies. SAA is constructed from CoVariance-Enhanced Channel Attention (CECA) and PseudoNL Spatial Attention (PSA). The former learns the interdependence among channels via second-order statistics over the feature channels, while the latter efficiently captures the positional correlation among points in the entire space via a pseudo feature map. The proposed SAA is a plug-and-play, end-to-end trainable module that can be integrated into existing segmentation networks. Extensive experiments on the S3DIS and ScanNet datasets demonstrate that networks integrated with our SAA improve mIoU performance, verifying that SAA helps 3D point cloud segmentation networks achieve excellent performance.
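The abstract gives no implementation details for CECA, so as a rough illustration of what "learning the interdependence among channels via second-order statistics" can mean, here is a minimal NumPy sketch. The function name, the mean reduction over the covariance matrix, and the sigmoid gating are all assumptions for illustration, not the paper's actual CECA module:

```python
import numpy as np

def covariance_channel_attention(features):
    """Hypothetical sketch of covariance-based channel attention.

    features: (N, C) array of per-point features (N points, C channels).
    Returns channel-reweighted features of the same shape.
    """
    # Second-order statistics: channel-wise covariance matrix (C, C).
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(features.shape[0] - 1, 1)

    # Summarize each channel's interdependence with all other channels,
    # then squash to a per-channel weight in (0, 1) with a sigmoid.
    channel_scores = cov.mean(axis=1)
    weights = 1.0 / (1.0 + np.exp(-channel_scores))

    # Re-weight each channel (broadcast over the N points).
    return features * weights[np.newaxis, :]

rng = np.random.default_rng(0)
pts = rng.standard_normal((1024, 64)).astype(np.float32)
out = covariance_channel_attention(pts)
print(out.shape)  # (1024, 64)
```

In a trainable version the reduction and gating would be learned layers (e.g. a small MLP over the covariance statistics) rather than the fixed mean-plus-sigmoid used here.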
Pages: 13
Related Papers (50 total)
  • [41] A PRE-TRAINING METHOD FOR 3D BUILDING POINT CLOUD SEMANTIC SEGMENTATION
    Cao, Yuwei
    Scaioni, Marco
    XXIV ISPRS CONGRESS IMAGING TODAY, FORESEEING TOMORROW, COMMISSION II, 2022, 5-2 : 219 - 226
  • [42] Context-Aware 3D Point Cloud Semantic Segmentation With Plane Guidance
    Weng, Tingyu
    Xiao, Jun
    Yan, Feilong
    Jiang, Haiyong
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 6653 - 6664
  • [43] 3D Point Cloud Semantic Segmentation Network Based on Coding Feature Learning
    Tong, Guofeng
    Liu, Yongxu
    Peng, Hao
    Shao, Yuyuan
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2023, 36 (04): : 313 - 326
  • [44] Semantic Segmentation of 3D Point Cloud to Virtually Manipulate Real Living Space
    Ishikawa, Yuki
    Hachiuma, Ryo
    Ienaga, Naoto
    Kuno, Wakaba
    Sugiura, Yuta
    Saito, Hideo
    2019 12TH ASIA PACIFIC WORKSHOP ON MIXED AND AUGMENTED REALITY (APMAR), 2019, : 63 - 69
  • [45] Temporal Feature Matching and Propagation for Semantic Segmentation on 3D Point Cloud Sequences
    Shi, Hanyu
    Li, Ruibo
    Liu, Fayao
    Lin, Guosheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (12) : 7491 - 7502
  • [46] 3D Recurrent Neural Networks with Context Fusion for Point Cloud Semantic Segmentation
    Ye, Xiaoqing
    Li, Jiamao
    Huang, Hexiao
    Du, Liang
    Zhang, Xiaolin
    COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 415 - 430
  • [47] 3D point cloud semantic segmentation based on visual guidance and feature enhancement
    Sitong Chen
    Yucheng Shu
    Lihong Qiao
    Zhengyang Wu
    Jing Ling
    Jiang Wu
    Weisheng Li
    Multimedia Systems, 2025, 31 (3)
  • [48] SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks
    Boulch, Alexandre
    Guerry, Joris
    Le Saux, Bertrand
    Audebert, Nicolas
    COMPUTERS & GRAPHICS-UK, 2018, 71 : 189 - 198
  • [49] SEGCloud: Semantic Segmentation of 3D Point Clouds
    Tchapmi, Lyne P.
    Choy, Christopher B.
    Armeni, Iro
    Gwak, JunYoung
    Savarese, Silvio
    PROCEEDINGS 2017 INTERNATIONAL CONFERENCE ON 3D VISION (3DV), 2017, : 537 - 547
  • [50] Exploring Semantic Information Extraction From Different Data Forms in 3D Point Cloud Semantic Segmentation
    Zhang, Ansi
    Li, Song
    Wu, Jie
    Li, Shaobo
    Zhang, Bao
    IEEE ACCESS, 2023, 11 : 61929 - 61949