Context-Based Point Generation Network for Point Cloud Completion

Cited by: 0
Authors
Lu, Lihua [1 ,2 ,3 ]
Li, Ruyang [1 ,2 ]
Wei, Hui [1 ,2 ]
Zhao, Yaqian [1 ,2 ]
Li, Rengang [1 ,2 ]
Affiliations
[1] Inspur Elect Informat Ind Co Ltd, Jinan, Peoples R China
[2] Inspur Beijing Elect Informat Ind Co Ltd, Beijing, Peoples R China
[3] Shandong Mass Informat Technol Res Inst, Jinan, Peoples R China
Source
NEURAL INFORMATION PROCESSING, PT I, ICONIP 2022 | 2023 / Volume 13623
Keywords
Point Cloud Completion; Context-based Point Transformation; Center Point Generation; Point Context Extraction
DOI
10.1007/978-3-031-30105-6_37
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing sparse-to-dense methods for point cloud completion generally focus on designing refinement and expansion modules to expand the point cloud from sparse to dense, but they overlook preserving a well-behaved generation process for the points at the sparse level, which leads to a loss of shape priors for the dense point cloud. To address this challenge, we introduce the Transformer into both the feature extraction and point generation processes, and propose a Context-based Point Generation Network (CPGNet) with Point Context Extraction (PCE) and Context-based Point Transformation (CPT) to control the point generation process at the sparse level. Our CPGNet can infer the missing point clouds at the sparse level via the PCE and CPT blocks, which provide well-arranged center points for generating the dense point clouds. The PCE block extracts both local and global context features of the observed points; multiple PCE blocks in the encoder hierarchically offer geometric constraints and priors for point completion. The CPT block fully exploits the geometric contexts present in the observed point clouds and transforms them into context features of the missing points; multiple CPT blocks in the decoder progressively refine these context features and finally generate the center points of the missing shapes. Quantitative and visual comparisons on the PCN and ShapeNet-55 datasets demonstrate that our model outperforms the state-of-the-art methods.
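To make the pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of a sparse-level generator built from self-attention encoder blocks (standing in for PCE) and cross-attention decoder blocks (standing in for CPT) that regress center points for the missing shape. All module names, layer sizes, block counts, and the learned-query center-point head are illustrative assumptions made for this sketch; they do not reproduce the authors' implementation.

    # Illustrative sketch only: module names, sizes, and the query-based
    # center-point head are assumptions, not the paper's released code.
    import torch
    import torch.nn as nn


    class PCEBlock(nn.Module):
        """Point Context Extraction (stand-in): self-attention over observed-point
        features, fused with a max-pooled global context vector."""

        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

        def forward(self, feats: torch.Tensor) -> torch.Tensor:  # (B, N, C)
            attn_out, _ = self.attn(feats, feats, feats)          # per-point (local) interaction
            feats = self.norm1(feats + attn_out)
            global_ctx = feats.max(dim=1, keepdim=True).values    # global context by max-pooling
            fused = torch.cat([feats, global_ctx.expand_as(feats)], dim=-1)
            return self.norm2(feats + self.mlp(fused))


    class CPTBlock(nn.Module):
        """Context-based Point Transformation (stand-in): cross-attention that lets
        missing-region query features attend to observed-point context features."""

        def __init__(self, dim: int, heads: int = 4):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

        def forward(self, queries: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
            attn_out, _ = self.cross_attn(queries, context, context)
            queries = self.norm1(queries + attn_out)
            return self.norm2(queries + self.mlp(queries))


    class SparseLevelGenerator(nn.Module):
        """Embeds the partial cloud, stacks PCE/CPT blocks, and regresses center
        points for the missing shape at the sparse level."""

        def __init__(self, dim: int = 128, num_centers: int = 128, depth: int = 3):
            super().__init__()
            self.embed = nn.Linear(3, dim)
            self.encoder = nn.ModuleList(PCEBlock(dim) for _ in range(depth))
            self.queries = nn.Parameter(torch.randn(num_centers, dim) * 0.02)
            self.decoder = nn.ModuleList(CPTBlock(dim) for _ in range(depth))
            self.to_xyz = nn.Linear(dim, 3)

        def forward(self, partial_xyz: torch.Tensor) -> torch.Tensor:  # (B, N, 3)
            ctx = self.embed(partial_xyz)
            for blk in self.encoder:          # hierarchical context extraction
                ctx = blk(ctx)
            q = self.queries.unsqueeze(0).expand(partial_xyz.size(0), -1, -1)
            for blk in self.decoder:          # progressive refinement of missing-point features
                q = blk(q, ctx)
            return self.to_xyz(q)             # (B, num_centers, 3) center points


    if __name__ == "__main__":
        partial = torch.rand(2, 512, 3)       # a batch of partial point clouds
        centers = SparseLevelGenerator()(partial)
        print(centers.shape)                  # torch.Size([2, 128, 3])

In a full sparse-to-dense pipeline, the predicted center points would then be expanded into a dense completion by an upsampling/refinement stage, which this sketch omits.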
Pages: 443 - 454
Number of pages: 12