Graph Clustering with High-Order Contrastive Learning

Times Cited: 1
Authors
Li, Wang [1 ]
Zhu, En [1 ]
Wang, Siwei [1 ]
Guo, Xifeng [2 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp Sci & Technol, Changsha 410000, Peoples R China
[2] Dongguan Univ Technol, Sch Cyberspace Sci, Dongguan 523808, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
graph clustering; unsupervised learning; contrastive learning; augmentation;
DOI
10.3390/e25101432
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
Graph clustering is a fundamental and challenging task in unsupervised learning. It has made great progress thanks to contrastive learning. However, we find two problems that need to be addressed: (1) the augmentations in most graph contrastive clustering methods are manual, which can result in semantic drift; and (2) contrastive learning is usually implemented at the feature level while ignoring the structure level, which can lead to sub-optimal performance. In this work, we propose a method termed Graph Clustering with High-Order Contrastive Learning (GCHCL) to solve these problems. First, we construct two views by Laplacian smoothing of the raw features with different normalizations and design a structure alignment loss to force the two views to be mapped into the same space. Second, we build a contrastive similarity matrix from two structure-based similarity matrices and force it to align with an identity matrix. In this way, our contrastive learning encompasses a larger neighborhood, enabling the model to learn clustering-friendly embeddings without an extra clustering module. In addition, our model can be trained on large datasets. Extensive experiments on five datasets validate the effectiveness of our model. For example, compared to the second-best baselines on four small and medium datasets, our model achieved an average improvement of 3% in accuracy. On the largest dataset, our model achieved an accuracy of 81.92%, whereas the compared baselines ran out of memory.
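The two core ideas of the abstract can be sketched in a few lines: build two views by Laplacian smoothing under symmetric and random-walk normalization, align them with a structure alignment loss, and push a cross-view similarity matrix toward the identity. This is a minimal NumPy illustration of those ideas, not the authors' implementation; the function names, the number of propagation steps, and the use of plain embedding cosine similarity (in place of the paper's structure-based similarity matrices) are simplifying assumptions.

```python
import numpy as np

def smooth(X, A, norm="sym", t=2):
    """Laplacian smoothing: propagate features t times through a
    normalized adjacency with self-loops (illustrative sketch)."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)
    if norm == "sym":
        d = 1.0 / np.sqrt(deg)
        P = (A_hat * d[:, None]) * d[None, :]   # D^{-1/2} (A+I) D^{-1/2}
    else:
        P = A_hat / deg[:, None]                # D^{-1} (A+I), random walk
    Z = X
    for _ in range(t):
        Z = P @ Z
    return Z

def l2_normalize(Z):
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def losses(Z1, Z2):
    """Structure alignment loss plus an identity-aligned contrastive loss."""
    n = Z1.shape[0]
    align = np.mean((Z1 - Z2) ** 2)             # pull the two views together
    C = l2_normalize(Z1) @ l2_normalize(Z2).T   # cross-view similarity matrix
    contrast = np.mean((C - np.eye(n)) ** 2)    # push C toward the identity
    return align, contrast

# Toy example: 4-node path graph with random 3-dim features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
Z1 = smooth(X, A, norm="sym")   # view 1: symmetric normalization
Z2 = smooth(X, A, norm="rw")    # view 2: random-walk normalization
align, contrast = losses(Z1, Z2)
```

In a trainable version the smoothed views would pass through an encoder and both losses would be minimized jointly; here the sketch only shows how the two normalizations yield distinct views of the same graph without any manual augmentation.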
Pages: 16