3D Point Cloud Dual Completion Network

Cited by: 1
Authors
Wu, Meng [1 ,2 ]
Yan, Ruiqi [1 ]
Sun, Zengguo [3 ]
Zhao, Huaidong [2 ,4 ]
He, Qianping [1 ]
Affiliations
[1] School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an
[2] Institute for Interdisciplinary Innovate Research, Xi’an University of Architecture and Technology, Xi’an
[3] School of Computer Science, Shaanxi Normal University, Xi’an
[4] School of Art, Xi’an University of Architecture and Technology, Xi’an
Keywords
dual-network; feature expansion; multi-head attention; point cloud completion
DOI
10.3778/j.issn.1002-8331.2311-0299
Abstract
Due to factors such as limited sensor performance and occlusions during acquisition, point clouds are often captured with large contiguous gaps that significantly degrade the representation of an object's shape and structure, making point cloud completion an essential task in 3D point cloud analysis. Existing completion networks that rely on an encoder-decoder framework to extract a global feature and predict the complete point cloud tend to disrupt the geometric structure of the input, causing positional drift and an uneven distribution of points. To address these issues, a dual-stage point cloud completion network is proposed: the first stage generates a coarse point cloud, and the second stage refines it into a finer one. The network incorporates a global feature perception module based on multi-head self-attention and a feature expansion module. The attention mechanism aggregates point cloud features and enlarges the feature dimension of each point, effectively strengthening inter-point correlations. Within the encoder-decoder framework, these modules improve both the completeness and the level of detail of the completed point cloud, effectively restoring its structure. Experiments show that the network achieves an average Chamfer distance of 5.98 × 10⁻³ on the PCN dataset and 6.11 × 10⁻³ on the Completion3D dataset, and its visualization results are superior to those of other methods. © 2025 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
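A minimal sketch (not the authors' released code) of how the components described in the abstract could be organized in PyTorch is shown below: a multi-head self-attention block standing in for the global feature perception module, a simple MLP-based feature expansion step, and the symmetric Chamfer distance used as the evaluation metric. All module names, feature sizes, and the expansion ratio are illustrative assumptions.

```python
# Illustrative sketch only; module names, dimensions, and the expansion
# ratio are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class GlobalFeaturePerception(nn.Module):
    """Aggregate per-point features with multi-head self-attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) per-point features
        attn_out, _ = self.attn(feats, feats, feats)
        return self.norm(feats + attn_out)  # residual connection + normalization


class FeatureExpansion(nn.Module):
    """Expand each point feature into `ratio` features (coarse-to-fine upsampling)."""

    def __init__(self, dim: int = 256, ratio: int = 4):
        super().__init__()
        self.ratio = ratio
        self.mlp = nn.Sequential(nn.Linear(dim, dim * ratio), nn.ReLU(inplace=True))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, n, d = feats.shape  # (B, N, dim)
        return self.mlp(feats).reshape(b, n * self.ratio, d)


def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets p (B, N, 3) and q (B, M, 3)."""
    d = torch.cdist(p, q) ** 2  # squared pairwise distances, shape (B, N, M)
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)
```

In a coarse-to-fine pipeline of this kind, the attention block would enrich per-point context before the coarse cloud is decoded, and the expansion block would lift the coarse features to the denser output resolution; the values reported in the abstract (5.98 × 10⁻³ and 6.11 × 10⁻³) correspond to this Chamfer distance averaged over the test sets.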
Pages: 297-305 (8 pages)