Self-attentive Rationalization for Interpretable Graph Contrastive Learning

Cited by: 1
Authors
Li, Sihang [1 ]
Luo, Yanchen [1 ]
Zhang, An [2 ]
Wang, Xiang [1 ]
Li, Longfei [3 ]
Zhou, Jun [3 ]
Chua, Tat-seng [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
[3] Ant Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised learning; interpretability; graph contrastive learning; self-attention mechanism;
DOI
10.1145/3665894
Chinese Library Classification
TP [Automation & Computer Technology];
Discipline Code
0812;
Abstract
Graph augmentation is the key component for revealing the instance-discriminative features of a graph as its rationale (an interpretation of it) in graph contrastive learning (GCL). Existing rationale-aware augmentation mechanisms in GCL frameworks roughly fall into two categories, each with inherent limitations: (1) non-heuristic methods guided by domain knowledge to preserve salient features, which require expensive expertise and lack generality, or (2) heuristic augmentations with a co-trained auxiliary model to identify crucial substructures, which face not only the dilemma between system complexity and transformation diversity but also the instability stemming from co-training two separate sub-models. Inspired by recent studies on transformers, we propose self-attentive rationale-guided GCL (SR-GCL), which integrates the rationale generator and encoder, leverages the self-attention values in the transformer module as natural guidance to delineate semantically informative substructures from both node- and edge-wise perspectives, and contrasts rationale-aware augmented pairs. On real-world biochemistry datasets, visualization results verify the effectiveness and interpretability of self-attentive rationalization, and the results on downstream tasks demonstrate the state-of-the-art performance of SR-GCL for graph model pre-training. Codes are available at https://github.com/lsh0520/SR-GCL.
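The abstract describes using self-attention values as a natural importance signal for selecting rationale substructures during augmentation. As a minimal sketch of the node-wise side of that idea (not the authors' actual implementation; the function name, sampling scheme, and `keep_ratio` parameter are illustrative assumptions), one could aggregate the attention each node receives and sample a high-importance subset of nodes to form a rationale-aware view:

```python
import numpy as np

def attention_rationale(attn, keep_ratio=0.7, seed=0):
    """Sketch: select a node-wise rationale from self-attention scores.

    attn: (n, n) self-attention matrix (row i = attention node i pays
          to every node) from a transformer encoder over graph nodes.
    keep_ratio: fraction of nodes retained in the augmented view.
    Returns sorted indices of the nodes kept as the rationale.
    """
    # Importance of node j = total attention it receives from all nodes.
    importance = attn.sum(axis=0)
    # Sample without replacement, with probability proportional to
    # importance, so two sampled views differ (giving contrastive
    # diversity) while both favour semantically salient nodes.
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(keep_ratio * attn.shape[0]))
    probs = importance / importance.sum()
    kept = rng.choice(attn.shape[0], size=n_keep, replace=False, p=probs)
    return np.sort(kept)
```

Sampling (rather than deterministically taking the top-k nodes) is one way to reconcile the diversity/saliency trade-off the abstract mentions; an analogous score over edges would give the edge-wise perspective.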
Pages: 21