Research on Three-Dimensional Point Cloud Reconstruction Method Based on Graph Neural Networks

Times cited: 0
Authors
Ma, Ruocheng [1 ]
Gao, Xiang [2 ]
Song, Zhaoxiang [3 ]
Affiliations
[1] Beijing Qihu Technol Co Ltd, Technol Ctr, Beijing, Peoples R China
[2] Xian Technol Univ, Sch Comp Sci & Engn, Xian, Peoples R China
[3] Northwest Univ, Sch Publ Adm, Xian, Peoples R China
Source
2024 3rd International Conference on Image Processing and Media Computing, ICIPMC 2024 | 2024
Keywords
Deep learning; Three-dimensional reconstruction; Point cloud; GCN
DOI
10.1109/ICIPMC62364.2024.10586702
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Discipline Classification Codes
081203; 0835
Abstract
The widespread use of three-dimensional reconstruction technology in domains such as medicine, architecture, and transportation has created a growing demand for precise target positioning and high-precision modeling. Three-dimensional reconstruction methods based on deep learning have demonstrated significant advantages. Nonetheless, traditional three-dimensional reconstruction networks often suffer from loss of fine image features, low point cloud density, and a tendency to produce voids, which degrade the quality and accuracy of the reconstruction. To address these challenges, this study introduces a graph neural network based algorithm that dynamically selects central points as a replacement for the original point cloud enhancement strategy. The algorithm first applies the CAS method to identify central points whose neighborhoods cover larger spatial volumes. The PointFlow algorithm is then applied to predict point clouds at these central points. The predicted points are fused with the original point cloud segments, yielding a complete, high-density three-dimensional point cloud of the target scene. Relative to Point-MVSNet, the proposed algorithm reduces the average error of the reconstructed three-dimensional models by 6.5%. The resulting point clouds are denser and higher-fidelity, exhibit richer detail, and consume fewer resources than alternative three-dimensional reconstruction algorithms.
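The pipeline summarized in the abstract (select well-covering central points, predict new points around them, fuse with the original cloud) can be sketched at a high level as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the coverage score (k-NN bounding-box volume), the sampling parameters, and the placeholder function predict_with_pointflow are assumptions standing in for the paper's CAS criterion and PointFlow prediction step.

```python
# Illustrative sketch only: center selection by a coverage proxy, local point
# prediction (placeholder for PointFlow), and fusion with the original cloud.
import numpy as np
from scipy.spatial import cKDTree


def select_centers(points: np.ndarray, k: int = 16, num_centers: int = 1024) -> np.ndarray:
    """Pick points whose k-NN neighborhoods span the largest volumes (proxy for CAS)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                       # (N, k) neighbor indices
    neighborhoods = points[idx]                            # (N, k, 3)
    extents = neighborhoods.max(1) - neighborhoods.min(1)  # per-point bounding-box extents
    coverage = extents.prod(axis=1)                        # bounding-box volume as coverage score
    return np.argsort(-coverage)[:num_centers]             # indices of the best-covering centers


def predict_with_pointflow(center: np.ndarray, n_new: int = 8, radius: float = 0.02) -> np.ndarray:
    """Placeholder for the PointFlow prediction step; here, random local samples."""
    return center + radius * np.random.randn(n_new, 3)


def densify(points: np.ndarray) -> np.ndarray:
    """Fuse the original points with points interpolated around the selected centers."""
    centers = points[select_centers(points)]
    new_points = np.concatenate([predict_with_pointflow(c) for c in centers])
    return np.concatenate([points, new_points])            # original + interpolated segments


if __name__ == "__main__":
    cloud = np.random.rand(5000, 3).astype(np.float32)
    dense = densify(cloud)
    print(cloud.shape, "->", dense.shape)                  # (5000, 3) -> (13192, 3)
```

In the actual method, the coverage criterion and the learned flow-based prediction would replace these placeholders, and the fused cloud would then be refined as described in the paper.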
Pages: 106-113
Number of pages: 8
References (10)
[1] Chen R., Han S., Xu J., Su H. Point-Based Multi-View Stereo Network. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019, pp. 1538-1547.
[2] Gu X., Fan Z., Zhu S., Dai Z., Tan F., Tan P. Cascade Cost Volume for High-Resolution Multi-View Stereo and Stereo Matching. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2492-2501.
[3] Qi C. R. Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
[4] Qi X., Liao R., Jia J., Fidler S., Urtasun R. 3D Graph Neural Networks for RGBD Semantic Segmentation. 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5209-5218.
[5] Roberts L. G. Machine Perception of Three-Dimensional Solids. 1963.
[6] Seitz S. M., et al. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006, Vol. 1, p. 519. DOI: 10.1109/CVPR.2006.19.
[7] Su J.-W. arXiv, 2020. DOI: 10.1109/CVPR42600.2020.00799; 10.1109/CVPR42600.2020.01009.
[8] Wang Y., Sun Y., Liu Z., Sarma S. E., Bronstein M. M., Solomon J. M. Dynamic Graph CNN for Learning on Point Clouds. ACM Transactions on Graphics, 2019, 38(5).
[9] Yao Y., Luo Z., Li S., Fang T., Quan L. MVSNet: Depth Inference for Unstructured Multi-view Stereo. Computer Vision - ECCV 2018, Part VIII, Vol. 11212, 2018, pp. 785-801.
[10] Yao Y., Luo Z., Li S., Shen T., Fang T., Quan L. Recurrent MVSNet for High-resolution Multi-view Stereo Depth Inference. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019, pp. 5520-5529.