3D Point Cloud Generation to Understand Real Object Structure via Graph Convolutional Networks

Cited by: 0
Authors
Ashfaq, Hamid [1 ]
Alazeb, Abdulwahab [2 ]
Almakdi, Sultan [2 ]
Alshehri, Mohammed S. [2 ]
Almujally, Nouf Abdullah [3 ]
Alotaibi, Saud S. [4 ]
Algarni, Asaad [5 ]
Jalal, Ahmad [1 ,6 ]
Affiliations
[1] Air Univ, Dept Comp Sci, E-9, Islamabad 44000, Pakistan
[2] Najran Univ, Coll Comp Sci & Informat Syst, Dept Comp Sci, Najran 55461, Saudi Arabia
[3] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Syst, Riyadh 11671, Saudi Arabia
[4] King Saud Univ, Informat Technol Dept, Riyadh 24382, Saudi Arabia
[5] Northern Border Univ, Fac Comp & Informat Technol, Dept Comp Sci, Rafha 91911, Saudi Arabia
[6] Korea Univ, Coll Informat, Dept Comp Sci & Engn, Seoul 02841, South Korea
Keywords
point cloud; 3D model reconstruction; generative adversarial network; graph convolution network; real object structure; reconstruction; stereo; accurate
DOI
10.18280/ts.410613
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Estimating and generating a three-dimensional (3D) model from a single image is a challenging problem that has gained considerable attention from researchers across computer vision and artificial intelligence. Prior work has addressed 3D reconstruction from both single-view and multi-view images, and 3D data can be represented in many forms, such as meshes, voxels, and point clouds. This article presents 3D reconstruction using standard and state-of-the-art methods. Conventionally, systems estimate 3D structure from multi-view images, stereo images, or object scans acquired with additional sensors such as Light Detection and Ranging (LiDAR) or depth cameras. The proposed semi-neural system blends a neural network with image processing filters and machine learning algorithms to extract the features fed to the network. Three types of features are used to estimate the 3D structure of an object from a single image: semantic segmentation, image depth, and surface normals. Semantic segmentation features are extracted with a segmentation filter and used to isolate the object region. Depth features, which estimate the object's extent along the z-axis, are obtained from a SENet-154 architecture trained on the NYUv2 dataset. Finally, surface normal features are derived from the estimated depth using edge detection together with horizontal and vertical convolutional filters; the surface normals determine the x, y, and z orientations of the object. The final object model is represented as a 3D point cloud, which makes it straightforward to assess model quality through point-wise distances between the reconstruction and the ground truth. Three publicly available benchmark datasets are used for evaluation: ShapeNetCore, ModelNet10, and ObjectNet3D. The system achieves an accuracy of 95.41% with a Chamfer distance of 0.00098 on ShapeNetCore, 94.74% with a Chamfer distance of 0.00132 on ModelNet10, and 95.53% with a Chamfer distance of 0.00091 on ObjectNet3D. For many object classes, the visual quality of the proposed system's reconstructions is outstanding compared with standard methods.
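The surface-normal and evaluation steps described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes NumPy/SciPy, uses Sobel filters as one concrete choice of "horizontal and vertical convolutional filters", adopts a mean-of-squared-distances Chamfer convention, and the function names `normals_from_depth` and `chamfer_distance` are hypothetical.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth map using horizontal
    and vertical derivative (Sobel) filters applied to the depth values."""
    dzdx = ndimage.sobel(depth, axis=1)   # horizontal depth gradient
    dzdy = ndimage.sobel(depth, axis=0)   # vertical depth gradient
    # Normal proportional to (-dz/dx, -dz/dy, 1), normalised to unit length.
    n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    return n / np.clip(np.linalg.norm(n, axis=2, keepdims=True), 1e-8, None)

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3):
    mean squared nearest-neighbour distance in both directions."""
    d_pq, _ = cKDTree(q).query(p)   # nearest point in q for every point of p
    d_qp, _ = cKDTree(p).query(q)   # nearest point in p for every point of q
    return float(np.mean(d_pq ** 2) + np.mean(d_qp ** 2))
```

Using a k-d tree keeps the nearest-neighbour search near O((N+M) log(N+M)) rather than the O(NM) cost of a brute-force pairwise distance matrix, which matters when comparing dense reconstructed point clouds against ground truth.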
Pages: 2935-2946
Number of pages: 12