High-frequency and low-frequency dual-channel graph attention network

Cited by: 0
Authors
Sun, Yukuan [1 ,2 ]
Duan, Yutai [3 ]
Ma, Haoran [4 ]
Li, Yuelong [4 ]
Wang, Jianming [4 ,5 ]
Affiliations
[1] Tiangong Univ, Ctr Engn Internship & Training, Tianjin 300387, Peoples R China
[2] Ajou Univ, Dept AI Convergence Network, Suwon 16499, South Korea
[3] Nankai Univ, Coll Artificial Intelligence, Tianjin 300350, Peoples R China
[4] Tiangong Univ, Sch Comp Sci & Technol, 399 BinShuiXi Rd, Tianjin 300387, Peoples R China
[5] Tiangong Univ, Tianjin Key Lab Autonomous Intelligence Technol &, Tianjin 300387, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph convolution; Graph attention; High-frequency information; Heterophilic graphs;
DOI
10.1016/j.patcog.2024.110795
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most existing graph convolution layers aggregate neighbor information by summing neighbor features with learnable or fixed weights. Because the attention values are always positive, these layers act as low-pass filters, which can lead to poor performance on heterophilic graphs. This paper proposes two graph convolutional layers, NGAT and HLGAT. NGAT is a convolution network that uses only negative attention values and therefore aggregates only the high-frequency information of neighbor nodes. HLGAT aggregates low-frequency and high-frequency information through two separate channels and fuses the two outputs in a learnable way. On node-classification tasks, both NGAT and HLGAT offer significant performance improvements over existing methods. The results clearly show that: (1) high-frequency neighborhood information plays a decisive role in heterophilic graphs; and (2) aggregating both low-frequency and high-frequency information of neighbor nodes can significantly improve performance on heterophilic graphs.
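The low-pass/high-pass distinction in the abstract can be illustrated numerically: with a row-normalized adjacency matrix, multiplying features by the adjacency smooths each node toward its neighbors (low-pass), while subtracting that smoothed signal keeps only the deviation from neighbors (high-pass). The following is a minimal sketch of a two-channel layer fused by a scalar weight; the function name, the fixed fusion weight, and all other details are illustrative assumptions, not the paper's actual HLGAT implementation.

```python
import numpy as np

def dual_channel_layer(x, adj, w, alpha):
    """Sketch of a two-channel (low/high-frequency) graph convolution.

    x: node features (n, d); adj: row-normalized adjacency (n, n);
    w: weight matrix (d, d_out); alpha in [0, 1]: fusion weight
    (learnable in the paper's setting, fixed here for illustration).
    """
    h = x @ w
    low = adj @ h             # low-pass channel: average of neighbor features
    high = h - adj @ h        # high-pass channel: deviation from neighbors
    return alpha * low + (1.0 - alpha) * high

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))
adj = np.full((5, 5), 0.2)    # toy row-normalized adjacency
w = rng.standard_normal((8, 4))
out = dual_channel_layer(x, adj, w, alpha=0.5)
print(out.shape)  # (5, 4)
```

Setting `alpha=1.0` recovers a purely low-pass layer (standard positive-weight aggregation), while `alpha=0.0` keeps only the high-frequency component, which is the regime NGAT's negative attention values target on heterophilic graphs.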
Pages: 11