Understanding Attention and Generalization in Graph Neural Networks

Cited by: 0
Authors
Knyazev, Boris [1 ]
Taylor, Graham W. [2 ]
Amer, Mohamed R. [3 ,4 ]
Affiliations
[1] Univ Guelph, Vector Inst, Guelph, ON, Canada
[2] Univ Guelph, Vector Inst, Canada CIFAR AI Chair, Guelph, ON, Canada
[3] Robust AI, Palo Alto, CA USA
[4] SRI Int, Menlo Pk, CA USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | 2019 / Vol. 32
Funding
Canada Foundation for Innovation
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We aim to better understand attention over nodes in graph neural networks (GNNs) and identify factors influencing its effectiveness. We particularly focus on the ability of attention GNNs to generalize to larger, more complex or noisy graphs. Motivated by insights from the work on Graph Isomorphism Networks, we design simple graph reasoning tasks that allow us to study attention in a controlled environment. We find that under typical conditions the effect of attention is negligible or even harmful, but under certain conditions it provides an exceptional gain in performance of more than 60% in some of our classification tasks. Satisfying these conditions in practice is challenging and often requires optimal initialization or supervised training of attention. We propose an alternative recipe and train attention in a weakly-supervised fashion that approaches the performance of supervised models, and, compared to unsupervised models, improves results on several synthetic as well as real datasets. Source code and datasets are available at https://github.com/bknyaz/graph-attention-pool.
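The abstract discusses attention over nodes, i.e., pooling a graph into a single embedding via learned per-node weights. As a minimal illustration (not the paper's exact method — the paper studies several ways to obtain and train attention coefficients, including weakly-supervised training), the sketch below implements generic softmax attention pooling with a hypothetical learnable vector `w`:

```python
import numpy as np

def attention_pool(X, w, tau=1.0):
    """Soft attention pooling over graph nodes.

    X : (N, d) node feature matrix.
    w : (d,) attention parameter vector (a hypothetical, simple
        parameterization chosen here for illustration).
    tau : softmax temperature.
    Returns a (d,) graph embedding: a convex combination of node
    features weighted by softmax-normalized attention scores.
    """
    scores = X @ w / tau                   # (N,) per-node relevance scores
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha = alpha / alpha.sum()
    return alpha @ X                       # attention-weighted average of nodes

# Usage: 4 nodes with 3-dim features; w attends to the first feature,
# so nodes with a large first component dominate the pooled embedding.
X = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
w = np.array([10., 0., 0.])
z = attention_pool(X, w)  # ≈ average of the two high-scoring nodes
```

The paper's finding that attention's effect ranges from negligible to a >60% gain corresponds, in this sketch, to how sharply `alpha` concentrates on task-relevant nodes, which depends on the initialization and training of `w`.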
Pages: 11
Related Papers (50 total)
  • [1] Generalization and Representational Limits of Graph Neural Networks
    Garg, Vikas K.
    Jegelka, Stefanie
    Jaakkola, Tommi
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [2] A Generalization of Recurrent Neural Networks for Graph Embedding
    Han, Xiao
    Zhang, Chunhong
    Guo, Chenchen
    Ji, Yang
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2018, PT II, 2018, 10938 : 247 - 259
  • [3] Generalization and Representational Limits of Graph Neural Networks
    Garg, Vikas K.
    Jegelka, Stefanie
    Jaakkola, Tommi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [4] Stability and Generalization of Graph Convolutional Neural Networks
    Verma, Saurabh
    Zhang, Zhi-Li
    KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1539 - 1548
  • [5] Subgroup Generalization and Fairness of Graph Neural Networks
    Ma, Jiaqi
    Deng, Junwei
    Mei, Qiaozhu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [6] SEA: Graph Shell Attention in Graph Neural Networks
    Frey, Christian M. M.
    Ma, Yunpu
    Schubert, Matthias
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT II, 2023, 13714 : 326 - 343
  • [7] Understanding Dropout for Graph Neural Networks
    Shu, Juan
    Xi, Bowei
    Li, Yu
    Wu, Fan
    Kamhoua, Charles
    Ma, Jianzhu
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 1128 - 1138
  • [8] Understanding Pooling in Graph Neural Networks
    Grattarola, Daniele
    Zambon, Daniele
    Bianchi, Filippo Maria
    Alippi, Cesare
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 2708 - 2718
  • [9] On the Topology Awareness and Generalization Performance of Graph Neural Networks
    Su, Junwei
    Wu, Chuan
    COMPUTER VISION - ECCV 2024, PT LXXXIV, 2025, 15142 : 73 - 89
  • [10] Graph Attention Networks for Neural Social Recommendation
    Mu, Nan
    Zha, Daren
    He, Yuanye
    Tang, Zhihao
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 1320 - 1327