Multi-Scale Self-Supervised Graph Contrastive Learning With Injective Node Augmentation

Cited by: 4
Authors
Zhang, Haonan [1 ]
Ren, Yuyang [1 ]
Fu, Luoyi [1 ]
Wang, Xinbing [1 ]
Chen, Guihai [1 ]
Zhou, Chenghu [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Chinese Acad Sci, Inst Geog Sci & Nat Resources Res, Beijing 100045, Peoples R China
Keywords
Graph contrastive learning; graph representation learning; node augmentation; self-supervised learning;
DOI
10.1109/TKDE.2023.3278463
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Contrastive Learning (GCL) with Graph Neural Networks (GNNs) has emerged as a promising method for learning latent node representations in a self-supervised manner. Most existing GCL methods employ random sampling for graph view augmentation and maximize the agreement between the node representations of the views. However, random augmentation is likely to produce very similar graph view samplings and can easily yield incomplete nodal contextual information, thus weakening the discriminability of node representations. To this end, this paper proposes a novel trainable node-augmentation scheme that is theoretically proven to be injective and that uses the subgraph formed by each node together with its neighbors to enhance the distinguishability of the nodal view. Notably, the proposed scheme enriches node representations via multi-scale contrastive training that integrates three levels of training granularity: subgraph-level, graph-level, and node-level contextual information. In particular, a subgraph-level objective between the augmented and original node views is constructed to enhance the discrimination of node representations, while graph- and node-level objectives built on global and local information from the original graph improve the generalization ability of the representations. Experimental results demonstrate that our framework outperforms existing state-of-the-art baselines and even surpasses several supervised counterparts on four real-world datasets for node classification.
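The abstract's core mechanism, maximizing agreement between two views of the same node while contrasting against other nodes, is the standard InfoNCE-style objective used across GCL methods. The sketch below is a generic, minimal illustration of such a view-agreement loss, not the paper's actual multi-scale formulation; the function name, temperature value, and embedding shapes are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Contrastive loss between two view embeddings z1, z2 of shape
    (n_nodes, dim): the same node across views is the positive pair
    (the diagonal), and all other nodes serve as negatives."""
    # L2-normalize rows so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau  # (n, n) cross-view similarity matrix
    # row-wise log-softmax; the positive pair sits on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# aligned views (identical embeddings) score lower than unrelated views
loss_aligned = info_nce_loss(z, z)
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

In the paper's setting, one such term would be computed at each granularity (subgraph, graph, and node level) and the terms combined into the overall training objective.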
Pages: 261-274 (14 pages)