CC-GNN: A Clustering Contrastive Learning Network for Graph Semi-Supervised Learning

Cited by: 3
Authors
Qin, Peng [1 ,2 ]
Chen, Weifu [3 ]
Zhang, Min [1 ]
Li, Defang [4 ]
Feng, Guocan [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Math, Guangzhou 510275, Peoples R China
[2] Sun Yat Sen Univ, Guangdong Prov Key Lab, Guangzhou 510275, Peoples R China
[3] Guangzhou Maritime Univ, Coll Informat & Telecommun Engn, Guangzhou 510725, Peoples R China
[4] Guangzhou Vocat Coll Technol & Business, Guangzhou 511442, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data augmentation; Graph neural networks; Clustering algorithms; Data models; Semisupervised learning; Analytical models; Task analysis; Clustering contrastive learning; graph data augmentation; graph neural networks; semi-supervised graph learning;
DOI
10.1109/ACCESS.2024.3398356
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In graph modeling, the scarcity of labeled data is a challenging issue. To address it, state-of-the-art graph models learn representations of graph data via contrastive learning. These models usually apply data augmentation to generate positive pairs, aiming to maximize the similarity of positive data pairs while minimizing the similarity of negative data pairs. However, samples with the same label may still be mapped far apart in the feature space. To solve this problem, we introduce a novel model called Clustering Contrastive Graph Neural Network (CC-GNN), which develops a new kind of grouped contrastive learning that maximizes the similarity of positive data groups and minimizes the similarity of negative groups; that is, contrastive learning is defined at the group level rather than the instance level. We assert that parameters learned by this kind of contrastive learning lead to better performance of graph neural networks on downstream classification tasks. We combined the clustering contrastive learning technique with three baseline GNN models for graph classification and found that their performance improved significantly, which strongly supports our assertion. We also tested the models for node classification on three popular citation networks. Finally, we conducted an ablation study to analyze how clustering contrastive learning influences the performance of a graph model.
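The record does not give CC-GNN's exact loss. As a rough illustration of the group-level idea described above, the sketch below contrasts cluster centroids of two augmented views with an InfoNCE-style objective; the centroid-based formulation, function name, and all parameters are assumptions for illustration, not the paper's method.

```python
import numpy as np

def group_contrastive_loss(z1, z2, clusters, tau=0.5):
    """Illustrative group-level contrastive loss (a sketch, not CC-GNN's loss).

    z1, z2   : (n, d) embeddings of the same n nodes under two augmented views
    clusters : (n,) cluster assignment in {0..k-1}, shared by both views
    tau      : softmax temperature

    Each cluster centroid in view 1 is pulled toward the same cluster's
    centroid in view 2 (positive group) and pushed away from the other
    clusters' centroids (negative groups).
    """
    k = int(clusters.max()) + 1
    # group instance embeddings into one centroid per cluster and view
    c1 = np.stack([z1[clusters == g].mean(axis=0) for g in range(k)])
    c2 = np.stack([z2[clusters == g].mean(axis=0) for g in range(k)])
    # L2-normalize so dot products are cosine similarities
    c1 /= np.linalg.norm(c1, axis=1, keepdims=True)
    c2 /= np.linalg.norm(c2, axis=1, keepdims=True)
    sim = c1 @ c2.T / tau                            # (k, k) centroid similarities
    # cross-entropy with the matching centroid as the positive
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(k), np.arange(k)].mean()
```

Because the contrast is between centroids rather than individual embeddings, instances sharing a cluster are encouraged to move together, which matches the abstract's claim that same-label samples should not be mapped separately in the feature space.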
Pages: 71956-71969
Page count: 14
Related Papers
47 records in total
[1] Bacciu D, 2018, PR MACH LEARN RES, V80
[2] Bachman P, 2019, ADV NEUR IN, V32
[3] Bresson X, 2018, Arxiv, DOI arXiv:1711.07553
[4] Bruna J, 2014, Arxiv, DOI arXiv:1312.6203
[5] Caron M, Bojanowski P, Joulin A, Douze M. Deep Clustering for Unsupervised Learning of Visual Features. COMPUTER VISION - ECCV 2018, PT XIV, 2018, 11218: 139-156
[6] Chapelle O, 2006, SEMISUPERVISED LEARN, V20, P542, DOI 10.1109/TNN.2009.2015974
[7] Chen T, 2020, PR MACH LEARN RES, V119
[8] Cui J, Zhong Z, Liu S, Yu B, Jia J. Parametric Contrastive Learning. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 695-704
[9] Defferrard M, 2016, ADV NEUR IN, V29
[10] Dwivedi VP, 2022, J MACH LEARN RES, V23