Efficient Training of Graph Neural Networks on Large Graphs

Cited by: 0
Authors
Shen, Yanyan [1 ]
Chen, Lei [2 ,3 ]
Fang, Jingzhi [2 ]
Zhang, Xin [2 ]
Gao, Shihong [2 ]
Yin, Hongbo [2 ,3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] HKUST, Hong Kong, Peoples R China
[3] HKUST GZ, Guangzhou, Peoples R China
Source
PROCEEDINGS OF THE VLDB ENDOWMENT | 2024, Vol. 17, No. 12
Funding
U.S. National Science Foundation
DOI
10.14778/3685800.3685844
Chinese Library Classification (CLC)
TP [Automation and computer technology]
Discipline Code
0812
Abstract
Graph Neural Networks (GNNs) have gained significant popularity for learning representations of graph-structured data. Mainstream GNNs employ the message passing scheme, which iteratively propagates information between connected nodes through edges. However, this scheme incurs high training costs, hindering the applicability of GNNs to large graphs. Recently, the database community has extensively researched effective solutions to facilitate efficient GNN training on massive graphs. In this tutorial, we provide a comprehensive overview of the GNN training process based on the graph data lifecycle, covering the graph preprocessing, batch generation, data transfer, and model training stages. We discuss recent data management efforts aimed at accelerating individual stages or improving the overall training efficiency. Recognizing the distinct training issues associated with static and dynamic graphs, we first focus on efficient GNN training on static graphs, followed by an exploration of training GNNs on dynamic graphs. Finally, we suggest some potential research directions in this area. We believe this tutorial is valuable for researchers and practitioners seeking to understand the bottlenecks of GNN training and the advanced data management techniques that accelerate the training of different GNNs on massive graphs in diverse hardware settings.
Pages: 4237-4240
Page count: 4
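
The lifecycle described in the abstract (graph preprocessing, batch generation, data transfer, model training) corresponds to the standard mini-batch GNN training loop. The following is a minimal sketch of that loop, assuming PyTorch Geometric (its NeighborLoader sampler and SAGEConv layer) and the small Cora benchmark; the fan-outs, hidden size, and learning rate are illustrative placeholders, not the tutorial's own configuration.

# Minimal sketch of mini-batch GNN training with neighbor sampling.
# Assumes PyTorch Geometric is installed; hyperparameters are illustrative.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import SAGEConv

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Graph preprocessing: load the graph topology and node features into host memory.
data = Planetoid(root="/tmp/Cora", name="Cora")[0]

# Batch generation: sample a fixed fan-out of neighbors per layer so each
# mini-batch only touches a small subgraph around the seed (training) nodes.
loader = NeighborLoader(
    data,
    num_neighbors=[10, 5],        # fan-out for the two GNN layers
    batch_size=128,
    input_nodes=data.train_mask,
)

class SAGE(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = SAGE(data.num_features, 64, int(data.y.max()) + 1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for batch in loader:
    # Data transfer: move the sampled subgraph and its features to the GPU.
    batch = batch.to(device)
    optimizer.zero_grad()
    # Model training: forward/backward pass on the sampled subgraph;
    # the loss is computed only on the seed nodes, which come first in the batch.
    out = model(batch.x, batch.edge_index)
    loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
    loss.backward()
    optimizer.step()

Each comment marks the lifecycle stage the surrounding lines correspond to; the data management techniques surveyed in the tutorial target exactly these stages (e.g., faster sampling for batch generation or reduced CPU-GPU movement during data transfer).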