GMCL: Graph Mask Contrastive Learning for Self-Supervised Graph Representation Learning

Cited by: 0
Authors
Xu, Long [1 ]
Pan, Zhiqiang [1 ]
Chen, Honghui [1 ]
Zhou, Tianjian [2 ]
Affiliations
[1] Natl Univ Def Technol, Natl Key Lab Informat Syst Engn, Changsha, Peoples R China
[2] Natl Univ Def Technol, Coll Syst Engn, Changsha, Peoples R China
Source
2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024 | 2024
Keywords
Graph Neural Networks; Self-Supervised Learning; Graph Representation Learning
DOI
10.1109/IJCNN60899.2024.10650219
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph self-supervised learning is a task with great potential. Mainstream approaches have relied on either contrastive or generative pretext tasks to extract graph embeddings, achieving commendable performance on downstream tasks such as node classification. However, the two paradigms have complementary strengths and limitations: (i) contrastive graph learning excels at capturing discriminative features but tends to overlook the structural representation of the graph itself; (ii) generative methods prioritize learning reconstructive features but struggle with large graphs and are prone to overfitting. Consequently, several simple fusion methods have been proposed that combine the encoded features of both approaches through concatenation, addition, or attention mechanisms for downstream tasks. These fusions, however, are overly coarse and discard information unique to each paradigm. To improve the synergy between the two, we propose an edge-perturbation data augmentation method that avoids generating spurious positive samples, together with a feature-imputation decoder that complements the contrastive features. We evaluate the model on two downstream tasks in graph representation learning, and experimental results demonstrate that our proposed method outperforms baseline methods on ten publicly available datasets.
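The abstract mentions an edge-perturbation data augmentation for contrastive views. This record does not describe the paper's exact procedure, but the generic technique it builds on, randomly dropping a fraction of edges to produce two correlated views of the same graph, can be sketched as follows (the function name, edge-list representation, and drop rate are illustrative assumptions, not the authors' implementation):

```python
import random

def perturb_edges(edges, drop_rate=0.2, seed=0):
    """Build one augmented view of a graph by randomly dropping edges.

    `edges` is a list of (u, v) tuples; each edge survives with
    probability 1 - drop_rate. This is the standard edge-dropping
    augmentation used in graph contrastive learning; the paper's
    method additionally constrains perturbation to avoid spurious
    positives, which is not modeled here.
    """
    rng = random.Random(seed)
    kept = [e for e in edges if rng.random() >= drop_rate]
    # Guard: never return an empty graph view.
    return kept if kept else [rng.choice(edges)]

# Two views of the same toy graph, used as a positive pair.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
view1 = perturb_edges(edges, drop_rate=0.2, seed=1)
view2 = perturb_edges(edges, drop_rate=0.2, seed=2)
```

In a contrastive setup, the encoder embeds both views and the training objective pulls embeddings of the same node across views together while pushing apart embeddings of different nodes.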
Pages: 8