DcTr: Noise-robust point cloud completion by dual-channel transformer with cross-attention

Cited by: 20
Authors
Fei, Ben [1 ]
Yang, Weidong [1 ,2 ]
Ma, Lipeng [1 ]
Chen, Wen-Ming [3 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Data Sci, Shanghai 200433, Peoples R China
[2] Zhuhai Fudan Innovat Inst, Hengqin New Area, Zhuhai 519000, Guangdong, Peoples R China
[3] Acad Engn & Technol, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point cloud; 3D vision; Transformer; Cross-attention; Dual-channel transformer;
DOI
10.1016/j.patcog.2022.109051
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Current point cloud completion research mainly utilizes the global shape representation and local features to recover the missing regions of a 3D shape from a partial point cloud. However, these methods suffer from inefficient utilization of local features and unstructured point prediction in local patches, which hardly yields a well-arranged point structure. To tackle these problems, we propose to employ a Dual-channel Transformer and Cross-attention (CA) for point cloud completion (DcTr). DcTr is adept at exploiting local features while preserving a well-structured generation process. Specifically, the dual-channel transformer leverages point-wise attention and channel-wise attention to summarize the deconvolution patterns used in the previous Dual-channel Transformer Point Deconvolution (DCTPD) stage to produce the deconvolution in the current DCTPD stage. Meanwhile, we employ cross-attention to convey the geometric information from the local regions of incomplete point clouds to the generation of complete ones at different resolutions. In this way, we can generate locally compact and structured point clouds by capturing the structural characteristics of the 3D shape in local patches. Our experimental results indicate that DcTr outperforms state-of-the-art point cloud completion methods on several benchmarks and is robust to various kinds of noise.
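The abstract distinguishes three attention variants: point-wise attention (points attend to points), channel-wise attention (feature channels attend to channels), and cross-attention (queries from the generated cloud, keys/values from the partial input). The sketch below illustrates only these generic mechanisms in NumPy; it is not the authors' DcTr implementation, and all function names, shapes, and weight matrices are hypothetical assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def point_wise_attention(X, Wq, Wk, Wv):
    # X: (N, C) point features. The (N, N) attention map relates points to points.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)  # (N, N)
    return A @ V  # (N, C')

def channel_wise_attention(X, Wq, Wk, Wv):
    # Same projections, but the (C', C') attention map relates feature channels.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q.T @ K / np.sqrt(Q.shape[0]), axis=-1)  # (C', C')
    return V @ A.T  # (N, C')

def cross_attention(Xq, Xkv, Wq, Wk, Wv):
    # Queries from coarse generated points (Nq, C); keys/values from the
    # partial input cloud (Nkv, C), so observed local geometry guides generation.
    Q, K, V = Xq @ Wq, Xkv @ Wk, Xkv @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)  # (Nq, Nkv)
    return A @ V  # (Nq, C')
```

All three share the same scaled dot-product form; they differ only in which axis the attention map is computed over and where queries versus keys/values come from.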
Pages: 13
Related papers
15 items in total
  • [1] VQ-DcTr: Vector Quantized Autoencoder With Dual-channel Transformer Points Splitting for 3D Point Cloud Completion
    Fei, Ben
    Yang, Weidong
    Chen, Wen-Ming
    Ma, Lipeng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 4769 - 4778
  • [2] Learning Cross-Attention Point Transformer With Global Porous Sampling
    Duan, Yueqi
    Sun, Haowen
    Yan, Juncheng
    Lu, Jiwen
    Zhou, Jie
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 6283 - 6297
  • [3] TSMCF: Transformer-Based SAR and Multispectral Cross-Attention Fusion for Cloud Removal
    Zhu, Hongming
    Wang, Zeju
    Han, Letong
    Xu, Manxin
    Li, Weiqi
    Liu, Qin
    Liu, Sicong
    Du, Bowen
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2025, 18 : 6710 - 6720
  • [4] LCASAFormer: Cross-attention enhanced backbone network for 3D point cloud tasks
    Guo, Shuai
    Cai, Jinyin
    Hu, Yazhou
    Liu, Qidong
    Xu, Mingliang
    PATTERN RECOGNITION, 2025, 162
  • [5] 3CROSSNet: Cross-Level Cross-Scale Cross-Attention Network for Point Cloud Representation
    Han, Xian-Feng
    He, Zhang-Yue
    Chen, Jia
    Xiao, Guo-Qiang
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02) : 3718 - 3725
  • [6] Transformer-Based Dual-Channel Self-Attention for UUV Autonomous Collision Avoidance
    Lin, Changjian
    Cheng, Yuhu
    Wang, Xuesong
    Yuan, Jianya
    Wang, Guoqing
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (03): : 2319 - 2331
  • [7] GazeSymCAT: A symmetric cross-attention transformer for robust gaze estimation under extreme head poses and gaze variations
    Zhong, Yupeng
    Lee, Sang Hun
    JOURNAL OF COMPUTATIONAL DESIGN AND ENGINEERING, 2025, 12 (03) : 115 - 129
  • [8] DTSSD: Dual-Channel Transformer-Based Network for Point-Based 3D Object Detection
    Zheng, Zhijie
    Huang, Zhicong
    Zhao, Jingwen
    Hu, Haifeng
    Chen, Dihu
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 798 - 802
  • [9] OAAFormer: Robust and Efficient Point Cloud Registration Through Overlapping-Aware Attention in Transformer
    Gao, Jun-Jie
    Dong, Qiu-Jie
    Wang, Rui-An
    Chen, Shuang-Min
    Xin, Shi-Qing
    Tu, Chang-He
    Wang, Wenping
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2024, 39 (04) : 755 - 770
  • [10] ADT: Person re-identification based on efficient attention mechanism and single-channel dual-channel fusion with transformer features aggregation
    Xing, Jiahui
    Lu, Jian
    Zhang, Kaibing
    Chen, Xiaogai
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 261