GSD-GNN: Generalizable and Scalable Algorithms for Decoupled Graph Neural Networks

Cited by: 3
Authors
Yu, Yunfeng [1 ]
Lin, Longlong [1 ]
Liu, Qiyu [1 ]
Wang, Zeli [2 ]
Ou, Xi [1 ]
Jia, Tao [1 ]
Affiliations
[1] Southwest Univ, Coll Comp & Informat Sci, Chongqing, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Chongqing, Peoples R China
Source
PROCEEDINGS OF THE 4TH ANNUAL ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2024 | 2024
Keywords
Graph Neural Networks; Decoupled Graph Neural Networks; Large-scale; Spectral graph theory; Sparsification
DOI
10.1145/3652583.3658051
CLC number
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNNs) have achieved remarkable performance in various applications, including social media analysis, computer vision, and natural language processing. Decoupled GNNs are a ubiquitous framework because of their high efficiency. However, existing decoupled GNNs suffer from several defects. (1) Studies of GNN feature propagation are isolated, with each emphasizing a single user-specified propagation matrix. (2) They still incur high computation costs to achieve provable performance on massive graphs with millions of nodes and billions of edges. (3) Their feature propagation steps are uniform across nodes, which makes it difficult for them to escape the over-smoothing dilemma. In this paper, we propose GSD-GNN, a Generalizable and Scalable Decoupled GNN framework based on spectral graph theory, which offers the following advantages. First, through minor parameter adjustments, it degenerates into most existing decoupled GNNs, such as APPNP, GDC, and SGC. Second, it efficiently computes an arbitrary propagation matrix in near-linear time with theoretical guarantees. Third, it customizes an adaptive feature propagation mechanism for each node to mitigate the over-smoothing dilemma. Finally, extensive experiments on massive graphs demonstrate that the proposed GSD-GNN is indeed effective, scalable, and flexible.
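To make the decoupling idea concrete, the sketch below illustrates the generic precompute-then-train pattern that the abstract says GSD-GNN generalizes: feature propagation is separated from learnable transformations and can be precomputed once. This is a minimal illustrative sketch, not the paper's implementation; the function names and the `K`/`alpha` parameters are assumptions. With `alpha = 0` the loop reduces to SGC-style K-step smoothing, while `alpha > 0` gives APPNP-style personalized-PageRank propagation.

```python
import numpy as np

def sym_norm_adj(A):
    # Symmetrically normalized adjacency with self-loops:
    # P = D^{-1/2} (A + I) D^{-1/2}, the propagation operator
    # shared by SGC/APPNP-style decoupled models.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def decoupled_propagate(A, X, K=2, alpha=0.0):
    # K propagation steps with a teleport factor alpha.
    # alpha = 0: pure K-step smoothing (SGC-like).
    # alpha > 0: H_{k+1} = (1-alpha) * P @ H_k + alpha * X (APPNP-like).
    # The result is precomputed once; any downstream classifier
    # (e.g. logistic regression) is then trained on it graph-free.
    P = sym_norm_adj(A)
    H = X.copy()
    for _ in range(K):
        H = (1.0 - alpha) * (P @ H) + alpha * X
    return H
```

The practical payoff of this decoupling is that the expensive graph operation runs once as preprocessing, so training scales with the feature matrix rather than the edge count; the paper's contribution, as stated, is computing such propagation matrices for arbitrary settings in near-linear time.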
Pages: 64-72
Page count: 9