Meta-MSGAT: Meta Multi-scale Fused Graph Attention Network

Cited by: 1
Authors
Chen, Ting [1 ]
Wang, Jianming [2 ]
Sun, Yukuan [3 ]
Affiliations
[1] Tiangong Univ, Sch Comp Sci & Technol, Tianjin, Peoples R China
[2] Tiangong Univ, Tianjin Key Lab Autonomous Intelligence Technol &, Tianjin, Peoples R China
[3] Tiangong Univ, Ctr Engn Internship & Training, Tianjin, Peoples R China
Source
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN | 2023
Funding
National Natural Science Foundation of China;
Keywords
multi-scale; message-passing; meta; plug-and-play;
DOI
10.1109/IJCNN54540.2023.10191411
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, graph attention networks (GAT) have received extensive attention. Based on how attention is extracted, GAT variants fall into two main categories: hierarchical graph attention and attention extracted from structural information. However, neither category is plug-and-play. Recent studies have found that the strong performance of the Transformer across tasks lies not in the multi-head attention mechanism but in the structure itself. Inspired by this, we propose a plug-and-play graph attention convolution structure: the Meta Multi-scale Fused Graph Attention Network (Meta-MSGAT). Meta-MSGAT consists of multi-scale feature extraction, feature transformation and aggregation, and message passing. Multi-scale feature extraction is implemented with multiple channels. Feature transformation and aggregation use different convolution kernels to diversify the feature representations. The message-passing layer consists of a mixer that is not restricted to a specific network layer. To verify the effectiveness of the proposed structure, we perform extensive experiments on six datasets and achieve improvements of 0.35% to 2.76% over the baselines. In particular, in ablation experiments we replace the mixer with an MLP or a GAT, and both variants still outperform the baselines.
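To make the three-stage structure in the abstract concrete, below is a minimal PyTorch sketch of one such block. Everything here beyond the abstract's wording is an assumption for illustration: the per-scale linear channels, the Conv1d kernels applied over the stacked scale dimension, the dense-adjacency propagation, and the default MLP mixer are stand-ins, not the paper's exact design. The point it illustrates is the pluggable mixer: any module mapping node features to node features (an MLP, a GAT layer, etc.) can be dropped in, matching the plug-and-play claim.

```python
# Illustrative sketch only: the layer choices below are assumptions,
# not the architecture published in the paper.
import torch
import torch.nn as nn


class MetaMSGATBlock(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_scales=3, mixer=None):
        super().__init__()
        # Stage 1: multi-scale feature extraction via parallel channels,
        # one projection per scale (assumed realization of "multiple channels").
        self.channels = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim) for _ in range(num_scales)]
        )
        # Stage 2: feature transformation and aggregation with different
        # convolution kernels; here, Conv1d filters of growing receptive
        # field applied across the stacked scale dimension (assumption).
        self.kernels = nn.ModuleList(
            [nn.Conv1d(hidden_dim, hidden_dim, kernel_size=k, padding=k // 2)
             for k in (1, 3, 5)]
        )
        # Stage 3: a pluggable mixer for message passing. Any module that
        # maps (N, hidden_dim) -> (N, hidden_dim) works (MLP, GAT layer, ...),
        # mirroring the claim that the mixer does not fix the network layer.
        self.mixer = mixer if mixer is not None else nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) normalized adjacency.
        scales = torch.stack([torch.relu(ch(x)) for ch in self.channels],
                             dim=1)                    # (N, S, H)
        h = scales.transpose(1, 2)                     # (N, H, S) for Conv1d
        h = sum(k(h) for k in self.kernels).mean(-1)   # fuse kernels, pool scales
        h = adj @ h                                    # propagate over the graph
        return self.mixer(h)                           # node-wise mixing


# Toy usage: 10 nodes, self-loop-only adjacency.
block = MetaMSGATBlock(in_dim=16, hidden_dim=32)
out = block(torch.randn(10, 16), torch.eye(10))        # -> (10, 32)
```

Because the mixer argument accepts any compatible module, swapping in a GAT layer or an MLP (as in the paper's ablations) requires no other change to the block.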
Pages: 8