Toward Graph Self-Supervised Learning With Contrastive Adjusted Zooming

Cited by: 8
Authors
Zheng, Yizhen [1 ]
Li, Ming [1 ,4 ]
Pan, Shirui [2 ]
Li, Yuan-Fang [1 ]
Peng, Hao [3 ]
Li, Zhao
Affiliations
[1] Monash Univ, Dept Data Sci & AI, Fac IT, Clayton, Vic 3800, Australia
[2] Griffith Univ, Sch Informat & Commun Technol, Gold Coast, Qld 4222, Australia
[3] Beihang Univ, Beijing Adv Innovat Ctr Big Data & Brain Comp, Beijing 100191, Peoples R China
[4] Zhejiang Normal Univ, Key Lab Intelligent Educ Technol & Applicat Zheji, Jinhua 321004, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; graph neural networks (GNNs); graph representation learning (GRL); self-supervised learning;
DOI
10.1109/TNNLS.2022.3216630
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Graph representation learning (GRL) is critical for analyzing graph-structured data. However, most existing graph neural networks (GNNs) rely heavily on labeling information, which is usually expensive to obtain in the real world. Although some existing works aim to learn graph representations effectively in an unsupervised manner, they suffer from limitations such as a heavy reliance on monotone contrastiveness and limited scalability. To overcome these problems, in light of recent advances in graph contrastive learning, we introduce a novel self-supervised GRL algorithm via graph contrastive adjusted zooming, namely G-Zoom, which learns node representations by leveraging the proposed adjusted zooming scheme. Specifically, this mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales: micro (i.e., node level), meso (i.e., neighborhood level), and macro (i.e., subgraph level). First, we generate two augmented views of the input graph via two different graph augmentations. Then, we establish contrastiveness at the above three scales progressively, from the node and neighborhood levels to the subgraph level, maximizing the agreement between graph representations across scales. While valuable clues can be extracted from a given graph at the micro and macro scales, the neighborhood-level contrastiveness gives G-Zoom a customizable option, based on our adjusted zooming scheme, to manually choose an optimal viewpoint between the micro and macro perspectives to better understand the graph data. In addition, to make our model scalable to large graphs, we use a parallel graph diffusion approach to decouple model training from the graph size. We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model consistently outperforms state-of-the-art methods.
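The three-scale contrastive objective described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the concrete loss forms (InfoNCE for node- and neighborhood-level agreement, a DGI-style binary objective against a mean-pooled summary for the subgraph level), the reversed-row negatives, and the weights `alpha`/`beta`/`gamma` are all assumptions made for exposition.

```python
import numpy as np

def _cosine(a, b, eps=1e-8):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + eps)
    return a @ b.T

def infonce(z1, z2, tau=0.5):
    """InfoNCE loss: node i in view 1 should agree with row i of z2,
    with all other rows acting as negatives."""
    sim = np.exp(_cosine(z1, z2) / tau)
    return float(np.mean(-np.log(np.diag(sim) / sim.sum(axis=1))))

def multi_scale_loss(z1, z2, adj, alpha=1.0, beta=1.0, gamma=1.0):
    """Illustrative three-scale contrastive objective (assumed form).

    z1, z2 : (N, d) node embeddings from two augmented graph views.
    adj    : (N, N) adjacency matrix of the input graph.
    """
    # Micro (node level): node-to-node agreement across the two views.
    micro = infonce(z1, z2)

    # Meso (neighborhood level): each node against the mean embedding
    # of its neighborhood in the other view.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    meso = infonce(z1, (adj @ z2) / deg)

    # Macro (subgraph level): DGI-style binary objective between nodes
    # and a mean-pooled summary of the other view; reversed rows serve
    # as simple corrupted negatives.  log-sigmoid is computed via
    # logaddexp for numerical stability.
    summary = z2.mean(axis=0)
    pos = np.mean(np.logaddexp(0.0, -(z1 @ summary)))
    neg = np.mean(np.logaddexp(0.0, z1[::-1] @ summary))
    macro = float(pos + neg)

    return alpha * micro + beta * meso + gamma * macro
```

In this sketch the weights play the role of the adjusted zooming scheme: raising `beta` relative to `alpha` and `gamma` shifts the training signal toward the intermediate, neighborhood-level viewpoint.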
Pages: 8882-8896
Page count: 15