EHG: efficient heterogeneous graph transformer for multiclass node classification

Cited by: 0
Authors
Wang, Man [1 ]
Liu, Shouqiang [1 ]
Deng, Zhen [2 ]
Affiliations
[1] South China Normal Univ, Sch Artificial Intelligence, Foshan 528000, Peoples R China
[2] Southern Med Univ, Neurol Dept, Nanfang Hosp, Guangzhou 510000, Peoples R China
Source
ADVANCES IN CONTINUOUS AND DISCRETE MODELS | 2025, Vol. 2025, Issue 01
Keywords
Graph Neural Networks; Representation Learning; Graph Attention; Heterogeneous Graph; Graph Analysis; Graph Transformer; Knowledge Graph;
DOI
10.1186/s13662-025-03885-0
CLC Number
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Graph neural networks empowered by the Transformer's self-attention mechanism have emerged as a preferred solution for many graph classification and prediction tasks. Despite their efficacy, these networks are often hampered by quadratic computational complexity and large model size, which pose significant challenges during training and inference. In this study, we present an approach to heterogeneous graph Transformers that navigates these limitations while capturing the rich diversity and semantic depth of graphs with multiple node and edge types. Our method, which streamlines the key-value interaction into a single linear-layer operation, maintains the same level of ranking accuracy while significantly reducing computational overhead and accelerating training. We introduce the EHG model, which demonstrates strong performance in multiclass node classification on heterogeneous graphs. Evaluation on the DBLP, ACM, OGBN-MAG, and OAG datasets shows that EHG outperforms existing heterogeneous graph models under identical hyperparameter configurations. Notably, EHG achieves a reduction of approximately 25% in parameter count and nearly 20% savings in training time compared to leading heterogeneous graph-transformer models.
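The abstract only describes the simplification at a high level. A minimal sketch of the general idea it alludes to — replacing the quadratic query-key score computation of self-attention with a per-node linear projection — might look like the following (all function and variable names here are hypothetical illustrations, not the paper's actual implementation):

```python
import numpy as np

def quadratic_attention(X, Wq, Wk, Wv):
    """Standard self-attention: computes an n-by-n score matrix, O(n^2) in nodes."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Row-wise softmax over pairwise scores.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def linear_message(X, W):
    """Simplified interaction: one linear layer applied per node, O(n) in nodes."""
    return X @ W

rng = np.random.default_rng(0)
n, d = 8, 4
X = rng.standard_normal((n, d))        # toy node-feature matrix
out = linear_message(X, rng.standard_normal((d, d)))
print(out.shape)  # → (8, 4)
```

The contrast is in the intermediate cost: `quadratic_attention` materializes an n×n score matrix, while `linear_message` touches each node once, which is the kind of saving the abstract's reported training-time reduction suggests.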
Pages: 18