Continuous-time dynamic graph learning based on spatio-temporal random walks

Cited: 0
Authors
Sheng, Jinfang [1 ]
Zhang, Yifan [1 ]
Wang, Bin [1 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, 932 South Lushan Rd, Changsha 410083, Hunan, Peoples R China
Keywords
Dynamic graphs; Continuous-time dynamic graphs; Graph representation learning; Random walk
DOI
10.1007/s11227-024-06881-5
Chinese Library Classification
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
The application of dynamic graph representation learning to dynamic systems such as social networks and transportation networks has grown in recent years, owing to its ability to efficiently integrate topological and temporal information into a compact representation. Continuous-time dynamic graphs (CTDGs) have received considerable attention because they retain precise temporal information. Existing methods based on random walks often use time-biased sampling to extract dynamic graph patterns, neglecting the topological properties of the graph. Moreover, previous anonymous walks do not share node identifiers, failing to fully exploit the correlations between network patterns, which play a crucial role in predicting future interactions. This paper therefore presents a novel continuous-time dynamic graph learning method based on spatio-temporal random walks, with three main contributions: (i) by considering both temporal constraints and topological structure, the method extracts diverse, expressive patterns from CTDGs; (ii) it introduces a node's hitting counts at each walk position as the node's relative identity, fully exploiting the correlation between network patterns while keeping the pattern structure consistent after node identities are removed; (iii) an attention mechanism aggregates the walk encodings, allowing the importance of different walks to be distinguished, which delineates inter-node relationships and structural attributes more precisely and thereby enhances the precision and expressive power of the node representations. The proposed method outperforms the strongest baseline on average, with gains of 2.72% and 2.46% across all transductive and inductive link prediction tasks, respectively, and up to an 8.7% improvement on specific datasets. It also achieves the second-best overall performance on dynamic node classification tasks.
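The "hitting counts as relative identity" idea in contribution (ii) can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's implementation: given a set of walks sampled from a root node, each node is replaced by the vector counting how often it appears at each walk position across the set, so the anonymized pattern structure is preserved without exposing raw node identifiers.

```python
from collections import defaultdict

def relative_identity(walks, walk_len):
    """Map each node id to its per-position hitting counts over a walk set.

    walks: list of node-id sequences (all of length walk_len) sampled
           from a single root node. Hypothetical toy input, not the
           paper's sampling procedure.
    """
    counts = defaultdict(lambda: [0] * walk_len)
    for walk in walks:
        for pos, node in enumerate(walk):
            counts[node][pos] += 1
    return dict(counts)

def anonymize(walk, identity):
    """Replace each node id in a walk with its hitting-count vector."""
    return [tuple(identity[node]) for node in walk]

# Toy usage: three walks of length 3 from the same root (node 1).
walks = [[1, 2, 3], [1, 2, 4], [1, 3, 4]]
identity = relative_identity(walks, walk_len=3)
# Node 1 always occupies position 0, so its identity is (3, 0, 0);
# node 3 appears once at position 1 and once at position 2: (0, 1, 1).
anon = anonymize([1, 2, 3], identity)
```

Because the identity vectors depend only on where nodes recur across the sampled walks, two structurally equivalent neighborhoods yield the same anonymized patterns even when their raw node ids differ, which is what lets correlations between network patterns be shared.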
Pages: 30