PHGNN: Pre-Training Heterogeneous Graph Neural Networks

Cited by: 0
Authors
Li, Xin [1 ]
Wei, Hao [2 ]
Ding, Yu [3 ]
Affiliations
[1] Ningbo Univ Finance & Econ, Coll Finance & Informat, Ningbo 315175, Peoples R China
[2] Natl Key Lab Sci & Technol Blind Signal Proc, Chengdu 610000, Peoples R China
[3] Nanjing Agr Univ, Coll Artificial Intelligence, Nanjing 210095, Peoples R China
Keywords
Task analysis; Graph neural networks; Semantics; Training; Feature extraction; Aggregates; Vectors; heterogeneous graph; pre-training;
DOI
10.1109/ACCESS.2024.3409429
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Graph neural networks (GNNs) on heterogeneous graphs have shown superior performance and attracted considerable research interest. However, many applications require GNNs to make predictions on test examples that are distributionally different from the training ones, while task-specific labeled data is often prohibitively expensive to obtain. An effective approach to this challenge is to pre-train an expressive GNN model on unlabeled data and then fine-tune it on a downstream task of interest. While pre-training has proven effective on homogeneous graphs, pre-training a GNN on a heterogeneous graph remains an open question: such graphs contain different types of nodes and edges, which raises new challenges of structural heterogeneity for graph pre-training. To capture the structural and semantic properties of heterogeneous graphs simultaneously, this paper develops a new strategy for Pre-training Heterogeneous Graph Neural Networks (PHGNN). The key to PHGNN's success is that it uses two different pre-training tasks to capture two kinds of similarity in a heterogeneous graph: the similarity between nodes of the same type and the similarity between nodes of different types. In addition, PHGNN introduces an attribute type prediction task to preserve node attribute information. We systematically study pre-training on two real-world heterogeneous graphs. The results demonstrate that PHGNN significantly improves generalization across downstream tasks.
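The abstract does not spell out the form of the two similarity-based pre-training tasks, but such objectives are commonly implemented as contrastive (InfoNCE-style) losses over node embeddings. The sketch below is a hypothetical illustration of that idea, not PHGNN's actual formulation: the node names (`author_a`, `author_b`, `paper_x`) and the temperature value are invented for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss: pull the anchor toward the positive sample
    and push it away from the negatives."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# Toy 2-D embeddings for a heterogeneous academic graph (hypothetical):
# two "author" nodes and one "paper" node.
author_a = [1.0, 0.0]
author_b = [0.9, 0.1]   # a similar node of the same type
paper_x  = [0.0, 1.0]   # a node of a different type, used as a negative

# Same-type task: treat a similar author as the positive sample.
loss_same_type = contrastive_loss(author_a, author_b, [paper_x])
# Cross-type task: treat a linked paper as the positive sample instead.
loss_cross_type = contrastive_loss(author_a, paper_x, [author_b])
```

Under this loss, a lower value means the anchor's embedding is already close to its positive; a heterogeneous-graph pre-training scheme would combine one such objective per similarity kind (plus, per the abstract, an attribute type prediction head) into a joint training signal.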
Pages: 135411-135418
Page count: 8