CLEAR: Cluster-Enhanced Contrast for Self-Supervised Graph Representation Learning

Cited by: 30
Authors
Luo, Xiao [1 ]
Ju, Wei [2 ]
Qu, Meng [3 ]
Gu, Yiyang [2 ]
Chen, Chong [4 ]
Deng, Minghua [1 ]
Hua, Xian-Sheng [4 ]
Zhang, Ming [2 ]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing 100871, Peoples R China
[2] Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
[3] Univ Montreal, Mila Quebec AI Inst, Montreal, PQ H3T 1J4, Canada
[4] Alibaba Grp, Discovery Adventure Momentum & Outlook DAMO Acad, Hangzhou 311100, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning (CL); graph clustering; graph representation learning; self-supervised learning; PREDICTION; NETWORK; CUTS;
DOI
10.1109/TNNLS.2022.3177775
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This article studies self-supervised graph representation learning, which is critical to various tasks such as protein property prediction. Existing methods typically aggregate the representations of individual nodes into graph representations but fail to comprehensively explore local substructures (i.e., motifs and subgraphs), which also play important roles in many graph mining tasks. In this article, we propose a self-supervised graph representation learning framework named Cluster-Enhanced Contrast (CLEAR), which models the structural semantics of a graph at the graph level and the substructure level, i.e., global semantics and local semantics, respectively. Specifically, we use graph-level augmentation strategies followed by a graph neural network-based encoder to explore global semantics. For local semantics, we first use graph clustering techniques to partition each graph into several subgraphs while preserving as much semantic information as possible. We then employ a self-attention interaction module to aggregate the semantics of all subgraphs into a local-view graph representation. Finally, we integrate both global and local semantics into a multiview graph contrastive learning framework, enhancing the semantic-discriminative ability of the graph representations. Extensive experiments on various real-world benchmarks demonstrate the efficacy of the proposed CLEAR over current graph self-supervised representation learning approaches on both graph classification and transfer learning tasks.
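The abstract walks through a concrete two-view pipeline: graph-level augmentation plus a GNN encoder for the global view, graph clustering plus a self-attention interaction module for the local view, and a multiview contrastive objective tying the two together. The sketch below illustrates that flow in plain PyTorch, under stated assumptions: a one-layer dense-adjacency GCN stands in for the paper's GNN encoder, random edge dropping for its graph-level augmentation, an externally supplied node-to-cluster assignment for its graph clustering step, and NT-Xent for its contrastive objective. None of these choices is claimed to match CLEAR's actual implementation.

    # Minimal illustrative sketch of the pipeline in the abstract (assumptions above).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def normalize_adj(adj):
        # Symmetric normalization D^{-1/2} (A + I) D^{-1/2} of a dense adjacency.
        adj = adj + torch.eye(adj.size(0))
        d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

    class GCNEncoder(nn.Module):
        # One GCN layer, H = ReLU(A_norm X W); a stand-in for the paper's encoder.
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, hid_dim)

        def forward(self, x, adj_norm):
            return F.relu(adj_norm @ self.lin(x))

    def drop_edges(adj, p=0.2):
        # Graph-level augmentation: drop a random fraction p of edges, symmetrically.
        keep = torch.triu((torch.rand_like(adj) > p).float(), diagonal=1)
        return adj * (keep + keep.t())

    def global_view(encoder, x, adj):
        # Global semantics: augment, encode, then mean-pool all node embeddings.
        h = encoder(x, normalize_adj(drop_edges(adj)))
        return h.mean(dim=0)

    def local_view(encoder, attn, x, adj, labels):
        # Local semantics: mean-pool each cluster's nodes into a subgraph embedding,
        # then let the subgraph embeddings interact via self-attention before pooling.
        h = encoder(x, normalize_adj(adj))
        parts = torch.stack([h[labels == c].mean(dim=0) for c in labels.unique()])
        seq = parts.unsqueeze(1)            # (num_clusters, batch=1, dim)
        fused, _ = attn(seq, seq, seq)      # self-attention interaction module
        return fused.squeeze(1).mean(dim=0)

    def nt_xent(z1, z2, tau=0.5):
        # Contrast global and local views of the same graph across the batch.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau
        return F.cross_entropy(logits, torch.arange(z1.size(0)))

    # Toy usage on a batch of random graphs with placeholder cluster assignments.
    enc = GCNEncoder(in_dim=8, hid_dim=16)
    attn = nn.MultiheadAttention(embed_dim=16, num_heads=1)
    graphs = []
    for _ in range(4):
        x = torch.randn(10, 8)
        a = torch.triu((torch.rand(10, 10) > 0.7).float(), diagonal=1)
        a = a + a.t()
        labels = torch.randint(0, 3, (10,))  # placeholder for graph clustering output
        graphs.append((x, a, labels))
    z_global = torch.stack([global_view(enc, x, a) for x, a, _ in graphs])
    z_local = torch.stack([local_view(enc, attn, x, a, l) for x, a, l in graphs])
    loss = nt_xent(z_global, z_local)
    loss.backward()

In a faithful implementation, the placeholder cluster assignments would come from the graph clustering step the abstract describes (e.g., a cut-based partition), so that each pooled part corresponds to a semantically coherent subgraph.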
Pages: 899-912
Page count: 14