3D Point Cloud Generation to Understand Real Object Structure via Graph Convolutional Networks

Cited by: 0
Authors
Ashfaq, Hamid [1 ]
Alazeb, Abdulwahab [2 ]
Almakdi, Sultan [2 ]
Alshehri, Mohammed S. [2 ]
Almujally, Nouf Abdullah [3 ]
Rlotaibi, Sard S. [4 ]
Algarni, Asaad [5 ]
Jalal, Ahmad [1 ,6 ]
Affiliations
[1] Air Univ, Dept Comp Sci, E-9, Islamabad 44000, Pakistan
[2] Najran Univ, Coll Comp Sci & Informat Syst, Dept Comp Sci, Najran 55461, Saudi Arabia
[3] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Syst, Riyadh 11671, Saudi Arabia
[4] King Saud Univ, Informat Technol Dept, Riyadh 24382, Saudi Arabia
[5] Northern Border Univ, Fac Comp & Informat Technol, Dept Comp Sci, Rafha 91911, Saudi Arabia
[6] Korea Univ, Coll Informat, Dept Comp Sci & Engn, Seoul 02841, South Korea
Keywords
point cloud; 3D model reconstruction; generative adversarial network; graph convolutional network; real object structure; RECONSTRUCTION; STEREO; ACCURATE
DOI
10.18280/ts.410613
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Estimating and generating a three-dimensional (3D) model from a single image is a challenging problem that has gained considerable attention from researchers across computer vision and artificial intelligence. Prior work has used both single-view and multi-view images of objects for 3D reconstruction. 3D data can be represented in many forms, such as meshes, voxels, and point clouds. This article presents 3D reconstruction using standard and state-of-the-art methods. Conventionally, systems estimate 3D structure from multi-view images, stereo images, or object scans supported by additional sensors such as Light Detection and Ranging (LiDAR) and depth sensors. The proposed semi-neural system instead blends a neural network with image-processing filters and machine-learning algorithms that extract the features fed to the network. Three types of features are used to estimate the 3D structure of an object from a single image: semantic segmentation, image depth, and surface normals. Semantic segmentation features are extracted with a segmentation filter that isolates the object region. Depth features, which locate the object along the z-axis, are estimated with a SENet-154 architecture trained on the NYUv2 dataset. Finally, surface-normal features are derived from the estimated depth using edge detection together with horizontal and vertical convolutional filters; surface normals determine the x, y, and z orientations of an object. The final object model is represented as a 3D point cloud, which makes it straightforward to assess model quality through point-wise distances between the reconstruction and the ground truth.
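The abstract derives surface normals from the estimated depth map using horizontal and vertical convolutional filters. A minimal NumPy sketch of that idea, assuming simple finite-difference gradients as the filters (the paper's exact filter kernels are not specified):

```python
import numpy as np

def surface_normals_from_depth(depth):
    """Estimate per-pixel unit surface normals from a depth map (H, W)."""
    # Horizontal and vertical depth gradients play the role of the
    # vertical/horizontal convolutional filters described in the abstract.
    dz_dx = np.gradient(depth, axis=1)
    dz_dy = np.gradient(depth, axis=0)
    # Unnormalized normal of the surface z = depth(x, y) is (-dz/dx, -dz/dy, 1).
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    # Normalize to unit length so each pixel encodes only orientation.
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norm

depth = np.random.rand(8, 8).astype(np.float32)
n = surface_normals_from_depth(depth)  # shape (8, 8, 3), unit-length vectors
```

In practice a Sobel or similar edge-detection kernel would replace `np.gradient`, matching the edge-detection step the abstract mentions.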
In this article, three publicly available benchmark datasets are used for system evaluation and experimental assessment: ShapeNetCore, ModelNet10, and ObjectNet3D. The system achieved an accuracy of 95.41% and a Chamfer distance of 0.00098 on ShapeNetCore, an accuracy of 94.74% and a Chamfer distance of 0.00132 on ModelNet10, and an accuracy of 95.53% and a Chamfer distance of 0.00091 on ObjectNet3D. For many object classes, the proposed system's visualizations are outstanding compared with standard methods.
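The evaluation reports Chamfer distance between the reconstructed and ground-truth point clouds. A minimal NumPy sketch of the standard symmetric Chamfer distance (the paper's exact variant and normalization are an assumption):

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point clouds pred (N, 3) and gt (M, 3)."""
    # Pairwise squared Euclidean distances between the two point sets: (N, M).
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(axis=2)
    # Mean nearest-neighbor squared distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud = rng.random((128, 3))
assert chamfer_distance(cloud, cloud) == 0.0  # identical clouds match exactly
```

This brute-force formulation is O(N*M); real pipelines typically use a KD-tree or GPU nearest-neighbor search for large clouds.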
Pages: 2935-2946 (12 pages)
Related papers (50 records)
  • [1] Structure aware 3D single object tracking of point cloud
    Zhou, Xiaoyu
    Wang, Ling
    Yuan, Zhian
    Xu, Ke
    Ma, Yanxin
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (04)
  • [2] Analysis of the structure material of the bronze object in 3D models point cloud
    Drofova, Irena
    Adamek, Milan
    PRZEGLAD ELEKTROTECHNICZNY, 2022, 98 (03): : 97 - 101
  • [3] Dynamic-Scale Graph Convolutional Network for Semantic Segmentation of 3D Point Cloud
    Xiu, Haoyi
    Shinohara, Takayuki
    Matsuoka, Masashi
    2019 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM 2019), 2019, : 271 - 278
  • [4] Wireless 3D Point Cloud Delivery Using Deep Graph Neural Networks
    Fujihashi, Takuya
    Koike-Akino, Toshiaki
    Chen, Siheng
    Watanabe, Takashi
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [5] Bioinspired point cloud representation: 3D object tracking
    Orts-Escolano, Sergio
    Garcia-Rodriguez, Jose
    Cazorla, Miguel
    Morell, Vicente
    Azorin, Jorge
    Saval, Marcelo
    Garcia-Garcia, Alberto
    Villena, Victor
    NEURAL COMPUTING & APPLICATIONS, 2018, 29 (09) : 663 - 672
  • [7] Object Volume Estimation Based on 3D Point Cloud
    Chang, Wen-Chung
    Wu, Chia-Hung
    Tsai, Ya-Hui
    Chiu, Wei-Yao
    2017 INTERNATIONAL AUTOMATIC CONTROL CONFERENCE (CACS), 2017,
  • [8] PU-FPG: Point cloud upsampling via form preserving graph convolutional networks
    Wang, Haochen
    Zhang, Changlun
    Chen, Shuang
    Wang, Hengyou
    He, Qiang
    Mu, Haibing
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (05) : 8595 - 8612
  • [9] PointGAT: Graph attention networks for 3D object detection
    Zhou H.
    Wang W.
    Liu G.
    Zhou Q.
    Intelligent and Converged Networks, 2022, 3 (02): : 204 - 216
  • [10] Octant Convolutional Neural Network for 3D Point Cloud Analysis
    Xu X.
    Shuai H.
    Liu Q.-S.
    Zidonghua Xuebao/Acta Automatica Sinica, 2021, 47 (12): : 2791 - 2800