Exploiting Neighbor Effect: Conv-Agnostic GNN Framework for Graphs With Heterophily

Cited by: 13
Authors
Chen, Jie [1 ,2 ]
Chen, Shouzhen [1 ,2 ]
Gao, Junbin [3 ]
Huang, Zengfeng [4 ]
Zhang, Junping [1 ,2 ]
Pu, Jian [5 ]
Affiliations
[1] Fudan Univ, Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[3] Univ Sydney, Univ Sydney Business Sch, Discipline Business Analyt, Camperdown, NSW 2006, Australia
[4] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[5] Fudan Univ, Inst Sci & Technol Brain Inspired Intelligence, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Measurement; Graph neural networks; Task analysis; Robustness; Convolution; Mixers; Learning systems; Graph neural networks (GNNs); heterophily; homophily; node classification; representation learning; NETWORKS;
DOI
10.1109/TNNLS.2023.3267902
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the homophily assumption in graph convolutional networks (GCNs), a common consensus in the graph node classification task is that graph neural networks (GNNs) perform well on homophilic graphs but may fail on heterophilic graphs with many interclass edges. However, the previous interclass-edge perspective and the related homophily-ratio metrics cannot adequately explain GNN performance on some heterophilic datasets, which implies that not all interclass edges are harmful to GNNs. In this work, we propose a new metric based on the von Neumann entropy to reexamine the heterophily problem of GNNs and investigate the feature aggregation of interclass edges from the perspective of whether a node's entire neighborhood is identifiable. Moreover, we propose a simple yet effective Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophilic datasets by learning the neighbor effect for each node. Specifically, we first decouple the feature of each node into a discriminative feature for the downstream task and an aggregation feature for graph convolution (GC). Then, we propose a shared mixer module that adaptively evaluates the neighbor effect of each node to incorporate the neighbor information. The proposed framework can be regarded as a plug-in component and is compatible with most GNNs. The experimental results on nine well-known benchmark datasets indicate that our framework can significantly improve performance, especially on heterophilic graphs. The average performance gain is 9.81%, 25.81%, and 20.61% over the graph isomorphism network (GIN), graph attention network (GAT), and GCN, respectively. Extensive ablation studies and robustness analyses further verify the effectiveness, robustness, and interpretability of our framework. Code is available at https://github.com/JC-202/CAGNN.
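
The abstract describes the decoupling-plus-mixer idea only at a high level; the short PyTorch / PyTorch Geometric sketch below illustrates one plausible reading of it under stated assumptions. The class name CAGNNLayer, the projections disc_proj and agg_proj, the gating mixer, and the choice of GCNConv as the plugged-in convolution are illustrative assumptions made here, not the authors' actual implementation (which is available at the linked repository).

# Minimal sketch, assuming PyTorch and PyTorch Geometric are installed.
# All module and variable names below are hypothetical illustrations of the
# decouple-and-mix idea stated in the abstract, not the CAGNN authors' API.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class CAGNNLayer(nn.Module):
    """Decouple each node's representation into a discriminative part (kept for
    the downstream task) and an aggregation part (passed through a graph
    convolution), then let a shared mixer gate how much neighbor information
    each node absorbs (the per-node 'neighbor effect')."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.disc_proj = nn.Linear(in_dim, hid_dim)   # discriminative feature
        self.agg_proj = nn.Linear(in_dim, hid_dim)    # aggregation feature
        self.conv = GCNConv(hid_dim, hid_dim)         # any GNN conv could be plugged in here
        # Shared mixer: estimates a per-node gate in [0, 1] from both features.
        self.mixer = nn.Sequential(
            nn.Linear(2 * hid_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h_disc = self.disc_proj(x)                        # feature for the downstream task
        h_agg = self.conv(self.agg_proj(x), edge_index)   # neighbor-aggregated feature
        gate = self.mixer(torch.cat([h_disc, h_agg], dim=-1))  # per-node neighbor effect
        return h_disc + gate * h_agg                      # incorporate neighbors adaptively

# Purely illustrative usage on a tiny random graph:
x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
layer = CAGNNLayer(in_dim=16, hid_dim=32)
out = layer(x, edge_index)   # shape [6, 32]

Because the gating sits outside the convolution operator itself, the same wrapper pattern applies unchanged if GCNConv is swapped for another message-passing layer, which is consistent with the "plug-in component" claim in the abstract.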
Pages: 13383-13396
Number of pages: 14