A Lightweight Method for Graph Neural Networks Based on Knowledge Distillation and Graph Contrastive Learning

Cited by: 1
Authors
Wang, Yong [1 ]
Yang, Shuqun [1 ]
Affiliations
[1] Shanghai Univ Engn Sci, Sch Elect & Elect Engn, Shanghai 201620, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 11
Funding
National Key Research and Development Program of China;
Keywords
graph neural network; lightweight technology; knowledge distillation; graph contrastive learning;
DOI
10.3390/app14114805
CLC Number
O6 [Chemistry];
Discipline Code
0703;
Abstract
Graph neural networks (GNNs) are crucial tools for processing non-Euclidean data. However, the dependencies and topology inherent in graph data cause scalability issues that make GNNs difficult to deploy in practice. Some methods address this by transferring GNN knowledge to MLPs through knowledge distillation, but the distilled MLPs cannot directly capture graph structure and rely only on node features, resulting in poor performance and sensitivity to noise. To solve this problem, we propose KDGCL, a lightweight optimization method for GNNs that combines graph contrastive learning with variable-temperature knowledge distillation. First, we use graph contrastive learning to capture graph structural representations, enriching the input to the MLP. Then, we transfer GNN knowledge to the MLP via variable-temperature knowledge distillation. Additionally, we enhance both node content and structural features before feeding them into the MLP, improving its performance and stability. Extensive experiments on seven datasets show that KDGCL outperforms baseline models in both transductive and inductive settings, with average improvements of 1.63% and 0.8%, respectively. Furthermore, KDGCL maintains parameter efficiency and inference speed, making it competitive in practice.
Pages: 13
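
Since this record carries no implementation details, the following is a minimal PyTorch sketch of the variable-temperature knowledge distillation loss described in the abstract. The confidence-based per-sample temperature (the `temp_min`/`temp_max` bounds and the linear interpolation) is an illustrative assumption, not the paper's specified scheme.

```python
import torch
import torch.nn.functional as F

def variable_temp_kd_loss(student_logits, teacher_logits, temp_min=1.0, temp_max=4.0):
    # Per-sample temperature from teacher confidence (assumed schedule):
    # confident predictions are distilled at a low temperature, uncertain
    # ones at a high temperature, which softens their distributions.
    with torch.no_grad():
        conf = teacher_logits.softmax(dim=-1).amax(dim=-1)   # confidence in (0, 1]
        temp = temp_max - (temp_max - temp_min) * conf       # shape: (N,)
        temp = temp.unsqueeze(-1)                            # shape: (N, 1), broadcast over classes
    log_p_student = F.log_softmax(student_logits / temp, dim=-1)
    p_teacher = F.softmax(teacher_logits / temp, dim=-1)
    # Per-node KL divergence; the usual T^2 factor keeps gradient
    # magnitudes comparable across temperatures.
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    return (kl * temp.squeeze(-1) ** 2).mean()

# Toy usage: distill a GNN teacher's node logits into an MLP student.
teacher_logits = torch.randn(8, 7)                      # 8 nodes, 7 classes (Cora-like)
student_logits = torch.randn(8, 7, requires_grad=True)  # stand-in for MLP output
loss = variable_temp_kd_loss(student_logits, teacher_logits)
loss.backward()
```

Tying the temperature to teacher confidence is one plausible reading of "variable temperature"; a time-varying (e.g., epoch-based) schedule would be an equally valid interpretation of the abstract.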