Self-attentive Rationalization for Interpretable Graph Contrastive Learning

Cited: 0
Authors
Li, Sihang [1 ]
Luo, Yanchen [1 ]
Zhang, An [2 ]
Wang, Xiang [1 ]
Li, Longfei [3 ]
Zhou, Jun [3 ]
Chua, Tat-seng [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
[3] Ant Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised learning; interpretability; graph contrastive learning; self-attention mechanism;
DOI
10.1145/3665894
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Graph augmentation is the key component for revealing the instance-discriminative features of a graph as its rationale (an interpretation of it) in graph contrastive learning (GCL). Existing rationale-aware augmentation mechanisms in GCL frameworks roughly fall into two categories, each with inherent limitations: (1) non-heuristic methods guided by domain knowledge to preserve salient features, which require expensive expertise and lack generality; or (2) heuristic augmentations with a co-trained auxiliary model to identify crucial substructures, which face not only a dilemma between system complexity and transformation diversity but also instability stemming from the co-training of two separate sub-models. Inspired by recent studies on transformers, we propose self-attentive rationale-guided GCL (SR-GCL), which integrates the rationale generator and the encoder, leverages the self-attention values in the transformer module as natural guidance to delineate semantically informative substructures from both node- and edge-wise perspectives, and contrasts rationale-aware augmented pairs. On real-world biochemistry datasets, visualization results verify the effectiveness and interpretability of self-attentive rationalization, and results on downstream tasks demonstrate the state-of-the-art performance of SR-GCL for graph model pre-training. Code is available at https://github.com/lsh0520/SR-GCL.
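The abstract does not spell out the augmentation procedure (the full implementation is at the linked repository), but the core idea of using self-attention values to protect semantically informative substructures during augmentation can be sketched as follows. This is a minimal illustrative sketch for the node-wise case only; `rationale_node_keep`, `rho`, and `drop_prob` are hypothetical names, not the paper's API.

```python
# Illustrative sketch (not the authors' code): rationale-aware node dropping
# guided by per-node self-attention mass. Nodes whose attention scores fall
# in the top-rho fraction are treated as the rationale and always kept; the
# remaining nodes are dropped at random to form an augmented view.
import random


def rationale_node_keep(attn_scores, rho=0.5, drop_prob=0.3, seed=0):
    """Return (kept node indices, rationale node indices).

    attn_scores : per-node attention mass (higher = more salient)
    rho         : fraction of nodes protected as the rationale
    drop_prob   : drop probability for non-rationale nodes
    """
    rng = random.Random(seed)
    n = len(attn_scores)
    # Rank nodes by attention score, highest first.
    order = sorted(range(n), key=lambda i: attn_scores[i], reverse=True)
    n_protect = max(1, int(rho * n))
    rationale = set(order[:n_protect])  # always preserved in every view
    kept = [i for i in range(n)
            if i in rationale or rng.random() >= drop_prob]
    return sorted(kept), sorted(rationale)


scores = [0.30, 0.05, 0.20, 0.10, 0.25, 0.10]
kept, rationale = rationale_node_keep(scores, rho=0.5, drop_prob=0.5)
# With rho=0.5, the three highest-attention nodes (indices 0, 4, 2)
# form the rationale and survive in every augmented view.
```

A contrastive objective would then be computed on two such views of the same graph; because the rationale is shared across views, the discriminative substructure is never destroyed by the augmentation.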
Pages: 21
Related Papers (50 total)
  • [1] Self-Attentive Contrastive Learning for Conditioned Periocular and Face Biometrics
    Ng, Tiong-Sik
    Chai, Jacky Chen Long
    Low, Cheng-Yaw
    Teoh, Andrew Beng Jin
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3251 - 3264
  • [2] Self-attentive deep learning method for online traffic classification and its interpretability
    Xie, Guorui
    Li, Qing
    Jiang, Yong
    COMPUTER NETWORKS, 2021, 196
  • [3] Lightweight Self-Attentive Sequential Recommendation
    Li, Yang
    Chen, Tong
    Zhang, Peng-Fei
    Yin, Hongzhi
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 967 - 977
  • [4] An Interpretable Brain Graph Contrastive Learning Framework for Brain Disorder Analysis
    Luo, Xuexiong
    Dong, Guangwei
    Wu, Jia
    Beheshti, Amin
    Yang, Jian
    Xue, Shan
    PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024, : 1074 - 1077
  • [5] Causal invariance guides interpretable graph contrastive learning in fMRI analysis
    Wei, Boyang
    Zeng, Weiming
    Shi, Yuhu
    Zhang, Hua
    ALEXANDRIA ENGINEERING JOURNAL, 2025, 117
  • [6] Self-Attentive Moving Average for Time Series Prediction
    Su, Yaxi
    Cui, Chaoran
    Qu, Hao
    APPLIED SCIENCES-BASEL, 2022, 12 (07)
  • [7] DarknetSec: A novel self-attentive deep learning method for darknet traffic classification and application identification
    Lan, Jinghong
    Liu, Xudong
    Li, Bo
    Li, Yanan
    Geng, Tongtong
    COMPUTERS & SECURITY, 2022, 116
  • [8] Graph Communal Contrastive Learning
    Li, Bolian
    Jing, Baoyu
    Tong, Hanghang
    PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 1203 - 1213
  • [9] Adaptive Graph Augmentation for Graph Contrastive Learning
    Wang, Zeming
    Li, Xiaoyang
    Wang, Rui
    Zheng, Changwen
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT IV, 2023, 14089 : 354 - 366
  • [10] Robust Hypergraph-Augmented Graph Contrastive Learning for Graph Self-Supervised Learning
    Wang, Zeming
    Li, Xiaoyang
    Wang, Rui
    Zheng, Changwen
    2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023, : 287 - 293