SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks

Cited by: 0
|
Authors
Fatemi, Bahare [1 ]
El Asri, Layla [2 ]
Kazemi, Seyed Mehran [3 ]
Affiliations
[1] Univ British Columbia, Vancouver, BC, Canada
[2] Borealis AI, Toronto, ON, Canada
[3] Google Res, Mountain View, CA USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) work well when the graph structure is provided. However, this structure may not always be available in real-world applications. One solution to this problem is to infer a task-specific latent structure and then apply a GNN to the inferred graph. Unfortunately, the space of possible graph structures grows super-exponentially with the number of nodes and so the task-specific supervision may be insufficient for learning both the structure and the GNN parameters. In this work, we propose the Simultaneous Learning of Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that provides more supervision for inferring a graph structure through self-supervision. A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn a task-specific graph structure on established benchmarks.
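The abstract describes jointly learning a latent adjacency matrix and the GNN parameters, with an auxiliary self-supervised objective supplying extra signal for the structure learner. The following is a minimal PyTorch-style sketch of that general idea only, not the authors' implementation: the similarity-based generator, the two-layer dense GCN, the feature-denoising auxiliary task, and all names and hyperparameters (GraphGenerator, DenseGCNLayer, train_step, lam, mask_rate) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphGenerator(nn.Module):
    """Produces a dense, symmetric, row-normalized adjacency from node features (assumed design, for illustration)."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, emb_dim)

    def forward(self, x):
        z = self.proj(x)                                # node embeddings
        a = torch.relu(z @ z.t())                       # similarity-based, non-negative adjacency
        a = (a + a.t()) / 2                             # symmetrize
        a = a + torch.eye(a.size(0), device=a.device)   # add self-loops
        return a / a.sum(dim=1, keepdim=True)           # row-normalize

class DenseGCNLayer(nn.Module):
    """One GCN-style propagation step over a dense adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a, x):
        return a @ self.lin(x)

def train_step(x, y, train_mask, gen, gnn_cls, gnn_dae, opt, lam=1.0, mask_rate=0.2):
    """One step: classification loss plus a self-supervised feature-denoising loss; both back-propagate into the graph generator."""
    opt.zero_grad()
    a = gen(x)

    # Task supervision: node classification on the inferred graph.
    logits = gnn_cls[1](a, torch.relu(gnn_cls[0](a, x)))
    loss_cls = F.cross_entropy(logits[train_mask], y[train_mask])

    # Self-supervision: reconstruct randomly masked features through the same inferred graph.
    mask = (torch.rand_like(x) < mask_rate).float()
    recon = gnn_dae[1](a, torch.relu(gnn_dae[0](a, x * (1 - mask))))
    loss_ssl = F.mse_loss(recon * mask, x * mask)

    loss = loss_cls + lam * loss_ssl
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random data (all dimensions are placeholders).
n, d, hid, c = 100, 32, 16, 7
x = torch.randn(n, d)
y = torch.randint(0, c, (n,))
train_mask = torch.rand(n) < 0.3
gen = GraphGenerator(d, hid)
gnn_cls = nn.ModuleList([DenseGCNLayer(d, hid), DenseGCNLayer(hid, c)])
gnn_dae = nn.ModuleList([DenseGCNLayer(d, hid), DenseGCNLayer(hid, d)])
params = list(gen.parameters()) + list(gnn_cls.parameters()) + list(gnn_dae.parameters())
opt = torch.optim.Adam(params, lr=0.01)
for _ in range(5):
    train_step(x, y, train_mask, gen, gnn_cls, gnn_dae, opt)

The key point the sketch illustrates is that the adjacency produced by the generator feeds both losses, so the self-supervised term adds gradient signal for structure learning beyond the task labels alone.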
Pages: 15
Related Papers
50 records in total
  • [1] Emergent linguistic structure in artificial neural networks trained by self-supervision
    Manning, Christopher D.
    Clark, Kevin
    Hewitt, John
    Khandelwal, Urvashi
    Levy, Omer
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2020, 117 (48) : 30046 - 30054
  • [2] Homophily-Enhanced Self-Supervision for Graph Structure Learning: Insights and Directions
    Wu, Lirong
    Lin, Haitao
    Liu, Zihan
    Liu, Zicheng
    Huang, Yufei
    Li, Stan Z.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 12358 - 12372
  • [3] Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision
    Zhang, Zeyang
    Wang, Xin
    Zhang, Ziwei
    Shen, Guangyao
    Shen, Shiqi
    Zhu, Wenwu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] FedGL: Federated graph learning framework with global self-supervision
    Chen, Chuan
    Xu, Ziyue
    Hu, Weibo
    Zheng, Zibin
    Zhang, Jie
    INFORMATION SCIENCES, 2024, 657
  • [5] INS-GNN: Improving graph imbalance learning with self-supervision
    Juan, Xin
    Zhou, Fengfeng
    Wang, Wentao
    Jin, Wei
    Tang, Jiliang
    Wang, Xin
    INFORMATION SCIENCES, 2023, 637
  • [6] Self-supervision meets kernel graph neural models: From architecture to augmentations
    Dan, Jiawang
    Wu, Ruofan
    Liu, Yunpeng
    Wang, Baokun
    Meng, Changhua
    Liu, Tengfei
    Zhang, Tianyi
    Wang, Ningtao
    Fu, Xing
    Li, Qi
    Wang, Weiqiang
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023 : 1076 - 1083
  • [7] Hyperspherically regularized networks for self-supervision
    Durrant, Aiden
    Leontidis, Georgios
    IMAGE AND VISION COMPUTING, 2022, 124
  • [8] Tailoring Self-Supervision for Supervised Learning
    Moon, WonJun
    Kim, Ji-Hwan
    Heo, Jae-Pil
    COMPUTER VISION, ECCV 2022, PT XXV, 2022, 13685 : 346 - 364
  • [9] How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks?
    Zhuang, Jun
    Al Hasan, Mohammad
    NEURAL PROCESSING LETTERS, 2022, 54 (04) : 2997 - 3018