Efficient Training of Graph Neural Networks on Large Graphs

Times Cited: 0
Authors
Shen, Yanyan [1]
Chen, Lei [2,3]
Fang, Jingzhi [2]
Zhang, Xin [2]
Gao, Shihong [2]
Yin, Hongbo [2,3]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] HKUST, Hong Kong, Peoples R China
[3] HKUST GZ, Guangzhou, Peoples R China
Source
PROCEEDINGS OF THE VLDB ENDOWMENT | 2024, Vol. 17, No. 12
Funding
U.S. National Science Foundation;
Keywords
DOI
10.14778/3685800.3685844
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Graph Neural Networks (GNNs) have gained significant popularity for learning representations of graph-structured data. Mainstream GNNs employ the message passing scheme, which iteratively propagates information between connected nodes through edges. However, this scheme incurs high training costs, hindering the applicability of GNNs on large graphs. Recently, the database community has extensively researched effective solutions to facilitate efficient GNN training on massive graphs. In this tutorial, we provide a comprehensive overview of the GNN training process based on the graph data lifecycle, covering the graph preprocessing, batch generation, data transfer, and model training stages. We discuss recent data management efforts aimed at accelerating individual stages or improving overall training efficiency. Recognizing the distinct training issues associated with static and dynamic graphs, we first focus on efficient GNN training on static graphs, followed by an exploration of training GNNs on dynamic graphs. Finally, we suggest potential research directions in this area. We believe this tutorial will help researchers and practitioners understand the bottlenecks of GNN training and the advanced data management techniques that accelerate the training of different GNNs on massive graphs in diverse hardware settings.
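The abstract names the message passing scheme and the four stages of the graph data lifecycle without illustrating them; the sketch below makes both concrete. It is a minimal, illustrative NumPy mock-up written for this record, not code from the tutorial: the function names (message_passing_layer, sample_neighbors), the mean-pooling aggregator, and all sizes and fanouts are assumptions chosen for brevity.

import numpy as np

def message_passing_layer(adj, feats, weight):
    """One GNN layer: every node mean-pools its neighbors' features
    along edges, then applies a learned linear transform and ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)  # avoid divide-by-zero
    aggregated = (adj @ feats) / deg                    # mean over neighbors
    return np.maximum(aggregated @ weight, 0.0)

def sample_neighbors(adj, seeds, fanout, rng):
    """Batch generation: keep the seed nodes plus at most `fanout`
    sampled neighbors each, forming a small subgraph for one mini-batch."""
    nodes = set(int(v) for v in seeds)
    for v in seeds:
        nbrs = np.flatnonzero(adj[v])
        if len(nbrs) > fanout:
            nbrs = rng.choice(nbrs, size=fanout, replace=False)
        nodes.update(int(u) for u in nbrs)
    return np.array(sorted(nodes))

# Toy driver over a random graph (graph preprocessing stage).
rng = np.random.default_rng(0)
n, d_in, d_out = 100, 16, 8
adj = (rng.random((n, n)) < 0.05).astype(np.float32)
feats = rng.standard_normal((n, d_in)).astype(np.float32)
weight = 0.1 * rng.standard_normal((d_in, d_out)).astype(np.float32)

for step in range(3):                                   # model training stage
    seeds = rng.choice(n, size=10, replace=False)       # batch generation stage
    nodes = sample_neighbors(adj, seeds, fanout=5, rng=rng)
    sub_adj = adj[np.ix_(nodes, nodes)]                 # data transfer stage: in a real
    sub_feats = feats[nodes]                            # system these slices are copied
    out = message_passing_layer(sub_adj, sub_feats, weight)  # from host memory to the GPU
    print(f"step {step}: {len(nodes)} nodes in batch -> embeddings {out.shape}")

On real billion-edge graphs, each of these steps (neighbor sampling, feature slicing, host-to-device copies, and the forward pass) becomes a distinct bottleneck, which is the stage-by-stage decomposition the tutorial surveys.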
Pages: 4237-4240
Page count: 4
Related Papers
50 records in total
[41]   An efficient segmented quantization for graph neural networks [J].
Dai, Yue ;
Tang, Xulong ;
Zhang, Youtao .
CCF TRANSACTIONS ON HIGH PERFORMANCE COMPUTING, 2022, 4 (04) :461-473
[42]   Efficient Scaling of Dynamic Graph Neural Networks [J].
Chakaravarthy, Venkatesan T. ;
Pandian, Shivmaran S. ;
Raje, Saurabh ;
Sabharwal, Yogish ;
Suzumura, Toyotaro ;
Ubaru, Shashanka .
SC21: INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS, 2021,
[43]   TinyGNN: Learning Efficient Graph Neural Networks [J].
Yan, Bencheng ;
Wang, Chaokun ;
Guo, Gaoyang ;
Lou, Yunkai .
KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, :1848-1856
[46]   Graph neural networks at the Large Hadron Collider [J].
DeZoort, Gage ;
Battaglia, Peter W. ;
Biscarat, Catherine ;
Vlimant, Jean-Roch .
NATURE REVIEWS PHYSICS, 2023, 5 (05) :281-303
[47]   Scaling Graph Neural Networks to Large Proteins [J].
Airas, Justin ;
Zhang, Bin .
JOURNAL OF CHEMICAL THEORY AND COMPUTATION, 2025, 21 (04) :2055-2066
[48]   POSTER: ParGNN: Efficient Training for Large-Scale Graph Neural Network on GPU Clusters [J].
Li, Shunde ;
Gu, Junyu ;
Wang, Jue ;
Yao, Tiechui ;
Liang, Zhiqiang ;
Shi, Yumeng ;
Li, Shigang ;
Xi, Weiting ;
Li, Shushen ;
Zhou, Chunbao ;
Wang, Yangang ;
Chi, Xuebin .
PROCEEDINGS OF THE 29TH ACM SIGPLAN ANNUAL SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING, PPOPP 2024, 2024, :469-471
[49]   Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [J].
Liu, Chuang ;
Ma, Xueqi ;
Zhan, Yibing ;
Ding, Liang ;
Tao, Dapeng ;
Du, Bo ;
Hu, Wenbin ;
Mandic, Danilo P. .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (10) :14903-14917
[50]   Two-Stage Training of Graph Neural Networks for Graph Classification [J].
Do, Manh Tuan ;
Park, Noseong ;
Shin, Kijung .
NEURAL PROCESSING LETTERS, 2023, 55 :2799-2823