Cyclic Self-attention for Point Cloud Recognition

Citations: 5
Authors
Zhu, Guanyu [1 ]
Zhou, Yong [1 ]
Yao, Rui [1 ]
Zhu, Hancheng [1 ]
Zhao, Jiaqi [2 ]
Affiliations
[1] China Univ Min & Technol, Engn Res Ctr Mine Digitizat, Sch Comp Sci & Technol, Minist Educ Peoples Republ China, 1 Daxue Rd, Xuzhou, Jiangsu, Peoples R China
[2] China Univ Min & Technol, Innovat Res Ctr Disaster Intelligent Prevent & Em, Minist Educ Peoples Republ China, Sch Comp Sci & Technol Engn, Res Ctr Mine Digitiza, 1 Daxue Rd, Xuzhou, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point cloud; self-attention; cyclic pairing; adaptive fuse; NETWORKS;
DOI
10.1145/3538648
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Point clouds provide a flexible geometric representation for computer vision research. However, the heavy demands on the number of input points and on computer hardware remain significant challenges that hinder deployment in real applications. To address these challenges, we design a simple and effective module named the cyclic self-attention module (CSAM). Specifically, three attention maps of the same input are obtained by cyclically pairing its feature maps, so that the attention space of the original input is explored sufficiently. CSAM can adequately exploit the correlations between points and obtain rich feature information even when the number of input points is reduced by a large factor. At the same time, it directs the computation toward the more essential features, relieving the burden on computer hardware. By simply stacking CSAMs, we build a point cloud classification network called the cyclic self-attention network (CSAN). We also propose a novel framework for point cloud semantic segmentation called the full cyclic self-attention network (FCSAN), which adaptively fuses the original mapping features with the features extracted by CSAM to better capture the contextual information of point clouds. Extensive experiments on several benchmark datasets show that our methods achieve competitive performance on classification and segmentation tasks.
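The abstract describes CSAM only at a high level. As an illustration, the following PyTorch sketch shows one plausible reading of "cyclically pairing the feature maps" of a single input to obtain three attention maps and then fusing their outputs with a residual connection; the class name, projection layers, fusion layer, and pairing order are all assumptions for illustration and do not correspond to the authors' released code.

# Minimal sketch of the cyclic-pairing idea, NOT the authors' implementation:
# three projections of the same point features are paired cyclically to
# produce three attention maps, whose outputs are fused back into the input.
import torch
import torch.nn as nn


class CyclicSelfAttentionSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Three projections of the same input (names a/b/c are illustrative).
        self.proj_a = nn.Linear(dim, dim)
        self.proj_b = nn.Linear(dim, dim)
        self.proj_c = nn.Linear(dim, dim)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim) point-wise features.
        a, b, c = self.proj_a(x), self.proj_b(x), self.proj_c(x)
        outs = []
        # Cyclic pairing: (a,b), (b,c), (c,a) each yield one attention map.
        for q, k, v in ((a, b, c), (b, c, a), (c, a, b)):
            attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
            outs.append(attn @ v)
        # Fuse the three attended feature sets and add a residual connection.
        return x + self.fuse(torch.cat(outs, dim=-1))


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)  # 2 clouds, 1024 points, 64-dim features
    print(CyclicSelfAttentionSketch(64)(feats).shape)  # torch.Size([2, 1024, 64])

The pairing order (a, b), (b, c), (c, a) is just one cyclic arrangement; the actual CSAM may pair the feature maps and fuse the resulting attention outputs differently.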
Pages: 19