Performance of Graph Neural Networks for Point Cloud Applications

Cited by: 1
Authors
Parikh, Dhruv [1 ]
Zhang, Bingyi [1 ]
Kannan, Rajgopal [2 ]
Prasanna, Viktor [1 ]
Busart, Carl [2 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] DEVCOM US Army Res Lab, Adelphi, MD USA
Source
2023 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE, HPEC | 2023
Keywords
Graph neural network; point cloud; k-nearest neighbors; dynamic graph construction; performance profiling;
DOI
10.1109/HPEC58863.2023.10363595
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812
Abstract
Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications (e.g., autonomous driving) require real-time processing at the edge with tight latency and memory constraints. Conducting performance analysis on such DGNNs thus becomes a crucial task for evaluating network suitability. This paper presents a profiling analysis of EdgeConv-based DGNNs applied to point cloud inputs. We assess their inference performance in terms of end-to-end latency and memory consumption on state-of-the-art CPU and GPU platforms. The EdgeConv layer has two stages: (1) dynamic graph generation using k-Nearest Neighbors (kNN) and (2) node feature update. The addition of dynamic graph generation via kNN in each (EdgeConv) layer improves model quality compared to networks that reuse the same static graph in every layer; this improvement comes, however, at the added computational cost of the dynamic graph generation stage (the kNN algorithm). Understanding this cost is essential for identifying the performance bottleneck and exploring potential avenues for hardware acceleration. To this end, this paper aims to shed light on the performance characteristics of EdgeConv-based DGNNs for point cloud inputs. Our performance analysis of a state-of-the-art EdgeConv network for classification shows that dynamic graph construction via kNN accounts for upwards of 95% of network latency on the GPU and almost 90% on the CPU. Moreover, we propose a quasi-Dynamic Graph Neural Network (qDGNN) that halts dynamic graph updates after a specific depth within the network, significantly reducing latency on both CPU and GPU while matching the original network's inference accuracy.
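The two-stage EdgeConv pipeline and the quasi-dynamic scheme described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`knn_graph`, `edgeconv_layer`, `qdgnn_forward`) are made up for this sketch, and the real EdgeConv layer uses a shared MLP with a nonlinearity where this sketch uses a single linear map.

```python
import numpy as np

def knn_graph(points, k):
    """Indices of each point's k nearest neighbors — the dynamic graph
    construction stage the paper identifies as the dominant latency cost."""
    # All-pairs squared Euclidean distances: the O(N^2) step.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    return np.argsort(d2, axis=1)[:, :k]  # shape (N, k)

def edgeconv_layer(feats, nbr_idx, weight):
    """EdgeConv-style node feature update: build edge features
    [x_i, x_j - x_i], apply a shared linear map, then max-aggregate
    over the k neighbors."""
    x_i = feats[:, None, :]                       # (N, 1, F)
    x_j = feats[nbr_idx]                          # (N, k, F)
    edge = np.concatenate(
        [np.broadcast_to(x_i, x_j.shape), x_j - x_i], axis=-1)  # (N, k, 2F)
    return np.max(edge @ weight, axis=1)          # (N, F_out)

def qdgnn_forward(points, weights, dyn_depth, k=3):
    """quasi-DGNN forward pass: recompute the kNN graph in feature space
    only for the first `dyn_depth` layers, then reuse the last graph."""
    feats, nbrs = points, None
    for depth, w in enumerate(weights):
        if depth < dyn_depth or nbrs is None:
            nbrs = knn_graph(feats, k)            # dynamic stage (expensive)
        feats = edgeconv_layer(feats, nbrs, w)    # feature-update stage
    return feats
```

Setting `dyn_depth` to the number of layers recovers a fully dynamic DGNN; smaller values skip graph recomputation in the deeper layers, which is the latency trade the proposed qDGNN makes.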
Pages: 7
Related Papers
(50 items total)
  • [31] Learning graph normalization for graph neural networks
    Chen, Yihao
    Tang, Xin
    Qi, Xianbiao
    Li, Chun-Guang
    Xiao, Rong
    NEUROCOMPUTING, 2022, 493 : 613 - 625
  • [32] Graph Regulation Network for Point Cloud Segmentation
    Du, Zijin
    Liang, Jianqing
    Liang, Jiye
    Yao, Kaixuan
    Cao, Feilong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 7940 - 7955
  • [33] A graph neural network model application in point cloud structure for prolonged sitting detection system based on smartphone sensor data
    Hardjianto, Mardi
    Istiyanto, Jazi Eko
    Tjoa, A. Min
    Syahrulfath, Arfa Shaha
    Purnama, Satriawan Rasyid
    Sari, Rifda Hakima
    Hakim, Zaidan
    Fuadin, M. Ridho
    Ananto, Nias
ETRI JOURNAL, 2025
  • [34] A Graph Neural Network Approach for Caching Performance Optimization in NDN Networks
    Hou, Jiacheng
    Xia, Huanzhang
    Lu, Haoye
    Nayak, Amiya
    IEEE ACCESS, 2022, 10 : 112657 - 112668
  • [35] PU-FPG: Point cloud upsampling via form preserving graph convolutional networks
    Wang, Haochen
    Zhang, Changlun
    Chen, Shuang
    Wang, Hengyou
    He, Qiang
    Mu, Haibing
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (05) : 8595 - 8612
  • [36] Introduction to Graph Neural Networks
Liu, Z.
Zhou, J.
Morgan and Claypool Publishers (14): 1 - 127
  • [37] Graph-based Point Cloud Denoising
    Gao, Xiang
    Hu, Wei
    Guo, Zongming
    2018 IEEE FOURTH INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM), 2018,
  • [38] A survey of graph neural networks in various learning paradigms: methods, applications, and challenges
    Lilapati Waikhom
    Ripon Patgiri
    Artificial Intelligence Review, 2023, 56 : 6295 - 6364
  • [39] A survey of graph neural networks in various learning paradigms: methods, applications, and challenges
    Waikhom, Lilapati
    Patgiri, Ripon
    ARTIFICIAL INTELLIGENCE REVIEW, 2023, 56 (07) : 6295 - 6364
  • [40] Robust indoor point cloud classification by fusing LSTM neural networks with supervoxel clustering
    Li, M. J.
    Wang, L. H.
    Cai, Z. H.
    Yang, M. S.
    Wu, R. J.
    Yao, M. M.
    XXIV ISPRS CONGRESS IMAGING TODAY, FORESEEING TOMORROW, COMMISSION II, 2022, 43-B2 : 221 - 227