3D Point Cloud Generation to Understand Real Object Structure via Graph Convolutional Networks

Cited by: 0
Authors
Ashfaq, Hamid [1 ]
Alazeb, Abdulwahab [2 ]
Almakdi, Sultan [2 ]
Alshehri, Mohammed S. [2 ]
Almujally, Nouf Abdullah [3 ]
Alotaibi, Saud S. [4]
Algarni, Asaad [5 ]
Jalal, Ahmad [1 ,6 ]
Affiliations
[1] Air Univ, Dept Comp Sci, E-9, Islamabad 44000, Pakistan
[2] Najran Univ, Coll Comp Sci & Informat Syst, Dept Comp Sci, Najran 55461, Saudi Arabia
[3] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Syst, Riyadh 11671, Saudi Arabia
[4] King Saud Univ, Informat Technol Dept, Riyadh 24382, Saudi Arabia
[5] Northern Border Univ, Fac Comp & Informat Technol, Dept Comp Sci, Rafha 91911, Saudi Arabia
[6] Korea Univ, Coll Informat, Dept Comp Sci & Engn, Seoul 02841, South Korea
Keywords
point cloud; 3D model reconstruction; generative adversarial network; graph convolutional network; real object structure; RECONSTRUCTION; STEREO; ACCURATE
DOI
10.18280/ts.410613
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Estimating and generating a three-dimensional (3D) model from a single image is a challenging problem that has attracted considerable attention across computer vision and artificial intelligence. Prior work has explored both single-view and multi-view object images for 3D reconstruction, and 3D data can be represented in several forms such as meshes, voxels, and point clouds. This article presents 3D reconstruction using both standard and state-of-the-art methods. Conventionally, 3D structure is estimated from multi-view images, stereo pairs, or object scans supported by additional sensors such as Light Detection and Ranging (LiDAR) and depth cameras. The proposed semi-neural system instead blends a neural network with image-processing filters and machine learning algorithms that extract the features fed into the network. Three types of features are used to estimate the 3D shape of an object from a single image: semantic segmentation, image depth, and surface normals. Semantic segmentation features are obtained from a segmentation filter that isolates the object region. Depth features, which estimate the object's extent along the z-axis, are produced by a SENet-154 architecture trained on the NYUv2 dataset. Surface normal features are then derived from the estimated depth using edge detection together with horizontal and vertical convolutional filters; the surface normals determine the x, y, and z orientation of the object. The final object model is represented as a 3D point cloud, which makes it straightforward to assess model quality through point-wise distances between the reconstruction and the ground truth. Three publicly available benchmark datasets are used for evaluation and experimental assessment: ShapeNetCore, ModelNet10, and ObjectNet3D. On ShapeNetCore the system achieves an accuracy of 95.41% with a Chamfer distance of 0.00098, on ModelNet10 an accuracy of 94.74% with a Chamfer distance of 0.00132, and on ObjectNet3D an accuracy of 95.53% with a Chamfer distance of 0.00091. For many object classes, the proposed system produces visualizations that compare favorably with standard methods.
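To make the pipeline described in the abstract concrete, the following is a minimal Python sketch (not the authors' implementation) of two of the described steps: deriving per-pixel surface normals from an estimated depth map via horizontal and vertical gradient filters, and scoring a reconstructed point cloud against ground truth with the Chamfer distance. The function names, the finite-difference approximation of the convolutional filters, and the brute-force Chamfer computation are illustrative assumptions.

# A minimal sketch (not the authors' implementation) of two steps from the abstract,
# assuming an estimated depth map is already available: surface normals from
# horizontal/vertical depth gradients, and Chamfer distance between point clouds.
import numpy as np

def normals_from_depth(depth):
    # Approximate the horizontal/vertical convolutional filters with finite differences.
    dz_dy, dz_dx = np.gradient(depth)                      # vertical, horizontal depth gradients
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))   # un-normalized (x, y, z) normals per pixel
    return n / np.clip(np.linalg.norm(n, axis=2, keepdims=True), 1e-8, None)

def chamfer_distance(pred, gt):
    # Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3).
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)  # (N, M) squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

if __name__ == "__main__":
    depth = np.random.rand(64, 64).astype(np.float32)      # placeholder for a learned depth estimate
    print(normals_from_depth(depth).shape)                  # (64, 64, 3)
    print(chamfer_distance(np.random.rand(1024, 3), np.random.rand(1024, 3)))

In practice the depth map would come from the learned depth estimator (SENet-154 trained on NYUv2, per the abstract), and a KD-tree nearest-neighbour query would replace the dense distance matrix for large point clouds.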
Pages: 2935-2946
Number of pages: 12