Generation-based Multi-view Contrast for Self-supervised Graph Representation Learning

Cited by: 0
Author
Han, Yuehui [1]
Affiliation
[1] Nanjing Univ Sci & Technol, Xiaolingwei St, Nanjing 210000, Jiangsu, Peoples R China
Keywords
Graph representation learning; contrastive learning; multi-view generation
DOI
10.1145/3645095
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Graph contrastive learning has made remarkable achievements in the self-supervised representation learning of graph-structured data. By employing perturbation functions (i.e., perturbations on the nodes or edges of the graph), most graph contrastive learning methods construct contrastive samples on the original graph. However, perturbation-based data augmentation methods randomly change the inherent information (e.g., attributes or structures) of the graph. Therefore, after node embedding on the perturbed graph, we cannot guarantee the validity of the contrastive samples or the resulting performance of graph contrastive learning. To this end, in this article, we propose a novel generation-based multi-view contrastive learning framework (GMVC) for self-supervised graph representation learning, which generates the contrastive samples with a generator rather than a perturbation function. Specifically, after node embedding on the original graph, we first employ random walks in the neighborhood to develop multiple relevant node sequences for each anchor node. We then utilize a transformer to generate the representations of relevant contrastive samples of the anchor node based on the features and structures of the sampled node sequences. Finally, by maximizing the consistency between the anchor view and the generated views, we force the model to effectively encode graph information into node embeddings. We perform extensive experiments on node classification and link prediction tasks on eight benchmark datasets, which verify the effectiveness of our generation-based multi-view graph contrastive learning method.
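The neighborhood random-walk step described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the adjacency-list format, the function name `sample_walks`, and all parameters are assumptions for illustration, not the paper's actual implementation.

```python
import random

def sample_walks(adj, anchor, num_walks=3, walk_len=4, seed=0):
    """Sample several random-walk node sequences starting at an anchor node.

    adj: dict mapping node -> list of neighbor nodes (hypothetical toy format).
    Returns num_walks sequences, each beginning with the anchor; these
    sequences would feed a generator that produces contrastive views.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        walk = [anchor]
        for _ in range(walk_len - 1):
            nbrs = adj[walk[-1]]
            if not nbrs:  # dead end: stop this walk early
                break
            walk.append(rng.choice(nbrs))
        walks.append(walk)
    return walks

# Toy graph: a 4-node cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = sample_walks(adj, anchor=0)
```

In the full framework, each sampled sequence would be encoded (the paper uses a transformer over the sequence's features and structure) to produce a generated view that is contrasted against the anchor's embedding.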
Pages: 17
Related papers
50 entries
  • [21] Self-Supervised Graph Representation Learning via Topology Transformations
    Gao, Xiang
    Hu, Wei
    Qi, Guo-Jun
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (04) : 4202 - 4215
  • [22] Dual-channel graph contrastive learning for self-supervised graph-level representation learning
    Luo, Zhenfei
    Dong, Yixiang
    Zheng, Qinghua
    Liu, Huan
    Luo, Minnan
    PATTERN RECOGNITION, 2023, 139
  • [23] Self-Supervised pre-training model based on Multi-view for MOOC Recommendation
    Tian, Runyu
    Cai, Juanjuan
    Li, Chuanzhen
    Wang, Jingling
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 252
  • [24] Exploring Self-Supervised Multi-view Contrastive Learning for Speech Emotion Recognition with Limited Annotations
    Khaertdinov, Bulat
    Jeuris, Pedro
    Sousa, Annanda
    Hortal, Enrique
    INTERSPEECH 2024, 2024, : 4708 - 4712
  • [25] Evidential Self-Supervised Graph Representation Learning via Prototype-based Consistency
    Ju, Wei
    Yi, Siyu
    Zhang, Ming
    PROCEEDINGS OF THE ACM TURING AWARD CELEBRATION CONFERENCE-CHINA 2024, ACM-TURC 2024, 2024, : 210 - 211
  • [26] Multi-view representation model based on graph autoencoder
    Li, Jingci
    Lu, Guangquan
    Wu, Zhengtian
    Ling, Fuqing
    INFORMATION SCIENCES, 2023, 632 : 439 - 453
  • [27] Dense lead contrast for self-supervised representation learning of multilead electrocardiograms
    Liu, Wenhan
    Li, Zhoutong
    Zhang, Huaicheng
    Chang, Sheng
    Wang, Hao
    He, Jin
    Huang, Qijun
    INFORMATION SCIENCES, 2023, 634 : 189 - 205
  • [28] SMGCL: Semi-supervised Multi-view Graph Contrastive Learning
    Zhou, Hui
    Gong, Maoguo
    Wang, Shanfeng
    Gao, Yuan
    Zhao, Zhongying
    KNOWLEDGE-BASED SYSTEMS, 2023, 260
  • [29] Graph Self-Supervised Learning: A Survey
    Liu, Yixin
    Jin, Ming
    Pan, Shirui
    Zhou, Chuan
    Zheng, Yu
    Xia, Feng
    Yu, Philip S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (06) : 5879 - 5900
  • [30] Self-supervised contrastive graph representation with node and graph augmentation?
    Duan, Haoran
    Xie, Cheng
    Li, Bin
    Tang, Peng
    NEURAL NETWORKS, 2023, 167 : 223 - 232