GRE2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning

Cited by: 0
Authors
Li, Quanjun [1 ]
Li, Weixuan [1 ]
Zheng, Xiya [2 ]
Zhou, Junhua [1 ]
Zhong, Wenming [3 ]
Chen, Xuhang [4 ]
Long, Chao [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Adv Mfg, Jieyang 522000, Peoples R China
[2] Wuyi Univ, Sch Elect & Informat Engn, Jiangmen 510006, Peoples R China
[3] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou 510006, Peoples R China
[4] Huizhou Univ, Sch Comp Sci & Engn, Huizhou 516007, Peoples R China
Keywords
Contrastive learning; Graph neural networks; Vectors; Representation learning; Matrix decomposition; Training; Data models; Computational modeling; Buildings; Mathematical models; Graph representation; triple neural network; multidimensional contrastive learning; node classification;
DOI
10.1109/ACCESS.2025.3553862
CLC Classification Number
TP [Automation & Computer Technology];
Subject Classification Code
0812 ;
Abstract
Graph representation learning aims to preserve graph topology when mapping nodes to vector representations, enabling downstream tasks such as node classification and community detection. However, most graph neural network models require extensive labelled data, limiting their practical applicability. To address this, researchers have explored Graph Contrastive Learning (GCL), which uses augmented graph data and contrastive learning to better capture graph structure and features, providing new avenues for solving real-world problems with limited labelled data. Building on this, this work proposes Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning (GRE2-MDCL). GRE2-MDCL first augments the input graph globally and locally using SVD and LAGNN, respectively. The augmented data is then fed into a triple network with a multi-head attention GNN as the core model. Finally, GRE2-MDCL constructs a multidimensional contrastive loss, incorporating cross-network, cross-view, and neighbor contrast, to optimize the model. Evaluated on Cora, Citeseer, and PubMed, GRE2-MDCL achieves average accuracies of 83.1%, 72.6%, and 82.7%, outperforming baseline GCL models. Visualizations also show tighter intra-cluster aggregation and clearer inter-cluster boundaries, demonstrating the framework's effectiveness in improving upon the baseline.
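The pipeline described in the abstract can be illustrated with a minimal NumPy sketch: a low-rank SVD reconstruction of the adjacency matrix as the global augmentation, paired with a plain cross-view InfoNCE loss. The function names here are hypothetical, and the single InfoNCE term is only a stand-in for the paper's full multidimensional loss (which additionally combines cross-network and neighbor contrast).

```python
import numpy as np

def svd_global_augment(adj: np.ndarray, rank: int) -> np.ndarray:
    """Global structural augmentation: low-rank SVD reconstruction
    of the adjacency matrix (illustrative stand-in for the paper's
    SVD-based global view)."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

def cross_view_infonce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """Plain cross-view InfoNCE: node i in view 1 is the positive of
    node i in view 2; all other nodes serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)   # pairwise cosine similarities, scaled
    pos = np.diag(sim)              # matching-node (positive) pairs
    return float(np.mean(-np.log(pos / sim.sum(axis=1))))

# Toy 4-node graph consisting of two disjoint edges.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 0., 0.],
                [0., 0., 0., 1.],
                [0., 0., 1., 0.]])
aug = svd_global_augment(adj, rank=2)          # smoothed global view
z1 = np.random.default_rng(0).normal(size=(4, 8))
z2 = z1 + 0.1 * np.random.default_rng(1).normal(size=(4, 8))
loss = cross_view_infonce(z1, z2)              # small when views align
```

The loss is minimized when each node's two views agree while differing from all other nodes, which is what drives the tighter intra-cluster aggregation reported in the paper's visualizations.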
Pages: 61312-61321
Page count: 10