Path-aware multi-hop graph towards improving graph learning

Cited by: 5
Authors
Duan, Rui [1 ,2 ]
Yan, Chungang [1 ,2 ]
Wang, Junli [1 ,2 ]
Jiang, Changjun [1 ,2 ]
Affiliations
[1] Tongji Univ, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China
[2] Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
Keywords
Graph neural networks; Multi-path multi-hop graphs; Hop-level attention;
DOI
10.1016/j.neucom.2022.04.085
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in graph-related tasks. Most of them pass messages only between direct neighbors, and deeper GNNs can in theory capture more global neighborhood information. However, they often suffer from over-smoothing as their depth increases. To address this limitation, we propose a path-aware multi-hop graph framework with hop-level attention (HAPG), which has a larger receptive field than GNNs of the same depth. HAPG generates multi-path multi-hop graphs by converting the original graph, enabling message passing between multi-hop neighbors within a single layer. HAPG aggregates one-hop and multi-hop neighbors by feeding the original graph and the generated graphs into GNNs, and the node embeddings are obtained by combining the multiple GNN outputs via hop-level attention. Comparative experiments with various existing GNNs on three benchmark datasets show that HAPG is an effective way to improve these models. Specifically, for semi-supervised node classification, the proposed HAPG-GAT and HAPG-AGNN achieve state-of-the-art performance. (C) 2022 Elsevier B.V. All rights reserved.
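The pipeline the abstract describes — build multi-hop graphs from the original graph, propagate over each in a single layer, then combine the per-hop outputs with hop-level attention — can be sketched as below. This is a minimal illustration, not the paper's implementation: the matrix-power construction of k-hop graphs stands in for the paper's path-aware construction, and the fixed query vector `q` stands in for learnable attention parameters.

```python
import numpy as np

def k_hop_adjacency(A, k):
    # Binary adjacency connecting nodes reachable within k hops (self-loops
    # added); a simple matrix-power stand-in for the paper's path-aware
    # multi-hop graph construction.
    M = np.linalg.matrix_power(A + np.eye(A.shape[0]), k)
    return (M > 0).astype(float)

def propagate(A_hat, X):
    # One GCN-style symmetric-normalized propagation step: D^-1/2 A D^-1/2 X.
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

def hop_level_attention(hop_embeddings, q):
    # Score each hop's embedding against a query vector q, softmax the
    # scores, and mix the per-hop embeddings with the resulting weights.
    scores = np.array([float(np.mean(H @ q)) for H in hop_embeddings])
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()
    combined = sum(a * H for a, H in zip(alpha, hop_embeddings))
    return combined, alpha

# Toy 4-node path graph with one-hot node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
rng = np.random.default_rng(0)
q = rng.standard_normal(4)

# One propagation per hop graph (1-, 2-, 3-hop), combined by hop-level attention.
hops = [propagate(k_hop_adjacency(A, k), X) for k in (1, 2, 3)]
Z, alpha = hop_level_attention(hops, q)
print(Z.shape, alpha)
```

Because every hop graph is propagated in a single layer, the combined embedding `Z` mixes information from a 3-hop receptive field without stacking three GNN layers, which is the mechanism the abstract credits for avoiding over-smoothing.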
Pages: 13-22
Page count: 10