SAPCNet: symmetry-aware point cloud completion network

Cited by: 1
Authors
Xue, Yazhang [1 ]
Wang, Guoqi [1 ]
Fan, Xin [1 ]
Yu, Long [2 ]
Tian, Shengwei [1 ]
Zhang, Huang [1 ]
Affiliations
[1] Xinjiang Univ, Coll Software, Urumqi, Peoples R China
[2] Xinjiang Univ, Network Ctr, Urumqi, Peoples R China
Keywords
point cloud completion; symmetry-aware transformer; structural similarity; seed;
DOI
10.1117/1.JEI.33.5.053031
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
In fields such as autonomous driving and 3D object reconstruction, complete 3D point cloud data is crucial. Existing methods often directly reconstruct complete point clouds from partial ones, overlooking the structural similarities within the point cloud data. To tackle this challenge, we introduce SAPCNet, an innovative network architecture that leverages the symmetry and structural similarities of point clouds to infer missing parts from known parts. We assume that incomplete point clouds share topological similarities with their symmetric counterparts. Through a feature-position pair extractor, we extract the center point and its features, which are then fused into an existing proxy. With our proposed symmetry-aware transformer, we analyze these features to accurately predict the positions of symmetric point proxies. In addition, we introduce a fine-seed generator to bridge the gap between the predicted missing point cloud and the original input point cloud, ensuring that the reconstructed point cloud maintains the geometric structure and visual characteristics consistent with the original data. Through a series of qualitative and quantitative evaluations, SAPCNet demonstrates outstanding performance across multiple datasets. (c) 2024 SPIE and IS&T
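The paper does not include code in this record; as a minimal illustration of the symmetry prior the abstract describes — that missing geometry can be hypothesized by mirroring known points across a symmetry plane — the following sketch uses a plain Householder reflection. All names here are hypothetical, and this is not the actual SAPCNet pipeline, which predicts symmetric point proxies with a learned symmetry-aware transformer rather than a fixed mirror.

```python
import numpy as np

def reflect_points(points, plane_normal):
    """Mirror a point cloud across a plane through the origin.

    Illustrative only: a crude stand-in for the symmetry assumption,
    not the learned proxy prediction used by SAPCNet.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                  # unit normal of the plane
    # Householder reflection: p' = p - 2 (p . n) n
    return points - 2.0 * (points @ n)[:, None] * n

# Example: a partial cloud lying on the +x side, mirrored about the y-z plane
partial = np.array([[1.0, 2.0, 3.0], [0.5, -1.0, 0.0]])
mirrored = reflect_points(partial, [1.0, 0.0, 0.0])
completed = np.vstack([partial, mirrored])     # naive "completion" by symmetry
```

This captures only the geometric intuition; the paper's contribution is to infer such symmetric counterparts from features when the symmetry plane is unknown and the object is only approximately symmetric.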
Cite
Save
Pages: 16
Related Papers
50 records
  • [31] MRRA-GAN: Multi-Resolution Relation-Aware GAN for Point Cloud Completion
    Ren, Ke
    Du, Zhenjiang
    He, Qifeng
    Xie, Ning
    Wang, Guan
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2045 - 2050
  • [32] CF-NET: COMPLEMENTARY FUSION NETWORK FOR ROTATION INVARIANT POINT CLOUD COMPLETION
    Chen, Bo-Fan
    Yeh, Yang-Ming
    Lu, Yi-Chang
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2275 - 2279
  • [33] AGFA-Net: Adaptive Global Feature Augmentation Network for Point Cloud Completion
    Liu, Xinpu
    Ma, Yanxin
    Xu, Ke
    Wan, Jianwei
    Guo, Yulan
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [34] Explicitly Guided Information Interaction Network for Cross-Modal Point Cloud Completion
    Xu, Hang
    Long, Chen
    Zhang, Wenxiao
    Liu, Yuan
    Gao, Zhen
    Dong, Zhen
    Yang, Bisheng
    COMPUTER VISION - ECCV 2024, PT XII, 2025, 15070 : 414 - 432
  • [35] Multi-stage refinement network for point cloud completion based on geodesic attention
    Chang, Yuchen
    Wang, Kaiping
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [36] Voting-based patch sequence autoregression network for adaptive point cloud completion
    Wu, Hang
    Miao, Yubin
    COMPUTERS & GRAPHICS-UK, 2024, 118 : 111 - 122
  • [37] Low Overlapping Point Cloud Registration Using Mutual Prior Based Completion Network
    Liu, Yazhou
    Liu, Zhiyong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 4781 - 4795
  • [38] Point cloud completion network for 3D shapes with morphologically diverse structures
    Si, Chun-Jing
    Yin, Zhi-Ben
    Fan, Zhen-Qi
    Liu, Fu-Yong
    Niu, Rong
    Yao, Na
    Shen, Shi-Quan
    Shi, Ming-Deng
    Xi, Ya-Jun
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (03) : 3389 - 3409
  • [40] Point Patches Contrastive Learning for Enhanced Point Cloud Completion
    Fei, Ben
    Liu, Liwen
    Luo, Tianyue
    Yang, Weidong
    Ma, Lipeng
    Li, Zhijun
    Chen, Wen-Ming
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 581 - 596