Time-aware Graph Structure Learning via Sequence Prediction on Temporal Graphs

Cited by: 2
Authors
Zhang, Haozhen [1 ]
Han, Xueting [2 ]
Xiao, Xi [1 ]
Bai, Jing [2 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Microsoft Res Asia, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023 | 2023
Keywords
Temporal Graphs; Graph Structure Learning; Contrastive Learning; Self-supervised Learning;
DOI
10.1145/3583780.3615081
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Temporal Graph Learning, which aims to model the time-evolving nature of graphs, has gained increasing attention and achieved remarkable performance recently. However, in reality, graph structures are often incomplete and noisy, which hinders temporal graph networks (TGNs) from learning informative representations. Graph contrastive learning uses data augmentation to generate plausible variations of existing data and learn robust representations. However, rule-based augmentation approaches may be suboptimal as they lack learnability and fail to leverage rich information from downstream tasks. To address these issues, we propose a Time-aware Graph Structure Learning (TGSL) approach via sequence prediction on temporal graphs, which learns better graph structures for downstream tasks by adding potential temporal edges. In particular, it predicts a time-aware context embedding based on previously observed interactions and uses Gumbel-Top-K to select the candidate edges closest to this context embedding. Additionally, several candidate sampling strategies are proposed to ensure both efficiency and diversity. Furthermore, we jointly learn the graph structure and TGNs in an end-to-end manner and perform inference on the refined graph. Extensive experiments on temporal link prediction benchmarks demonstrate that TGSL yields significant gains for popular TGNs such as TGAT and GraphMixer, and that it outperforms other contrastive learning methods on temporal graphs. We release the code at https://github.com/ViktorAxelsen/TGSL.
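The edge-selection step described in the abstract relies on the Gumbel-Top-K trick: perturbing candidate scores with Gumbel noise and keeping the K largest, which samples K candidates without replacement in proportion to their softmax probabilities while remaining amenable to relaxation during training. The sketch below illustrates only this generic trick, not the paper's actual implementation; the score function (dot product between a context embedding and candidate embeddings) and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_top_k(scores: np.ndarray, k: int) -> np.ndarray:
    """Sample k indices without replacement, with probability
    proportional to softmax(scores), via the Gumbel-Top-K trick."""
    # Add i.i.d. Gumbel(0, 1) noise to each score, then take the k
    # largest perturbed scores (argsort is ascending, so reverse it).
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))
    return np.argsort(scores + gumbel)[::-1][:k]

# Hypothetical setup: score each candidate edge by the similarity of
# its node embedding to a predicted time-aware context embedding.
context = rng.normal(size=16)              # predicted context embedding
candidates = rng.normal(size=(100, 16))    # 100 candidate edge embeddings
scores = candidates @ context
chosen = gumbel_top_k(scores, k=5)         # 5 selected candidate edges
```

At inference time the noise can be dropped, reducing the selection to a plain deterministic top-K over the scores.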
Pages: 3288-3297
Page count: 10