SEGODE: a structure-enhanced graph neural ordinary differential equation network model for temporal link prediction

Cited by: 0
Authors
Fu, Jiale [1]
Guo, Xuan [2]
Hou, Jinlin [2]
Yu, Wei [1,2]
Shi, Hongjin [1]
Zhao, Yanxia [3]
Affiliations
[1] Zhejiang Yuexiu Univ, Coll Int Business, Shaoxing 312069, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
[3] Zhejiang Yuexiu Univ, Finance Off, Shaoxing 312069, Peoples R China
Keywords
Link prediction; Temporal network; Neural ordinary differential equation; Network representation learning
DOI
10.1007/s10115-024-02261-w
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The objective of temporal link prediction is to forecast future connections in a network by analyzing its structure and tracking its temporal dynamics. Existing methods, however, rely heavily on the most recent network snapshots, which limits their ability to uncover and exploit the underlying evolutionary patterns needed for effective dynamical inference. As a result, these models predict the near future much better than the more distant future. Moreover, most current methods overlook the influence of complex higher-order and global structural dynamics, which could improve predictive accuracy. To address these challenges, we introduce the structure-enhanced graph neural ordinary differential equation (SEGODE) network, a framework that integrates neural ordinary differential equations with attention mechanisms for dynamic inference and strengthens the model's ability to capture higher-order and global structures. Extensive experiments on seven real-world datasets show that SEGODE not only performs well on link prediction but also remains effective when data is sparse.
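The abstract describes evolving node states with a neural ODE and scoring future links from the resulting embeddings. SEGODE's exact architecture is not given in this record, so the following is only an illustrative sketch of the general graph-neural-ODE idea: node embeddings z(t) evolve under dz/dt = f(A, z), integrated here with a fixed-step Euler solver, and a candidate edge is scored from the final state. The toy graph, weight matrix, and dot-product scoring rule are all assumptions for illustration.

```python
import numpy as np


def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops added."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)                      # degrees are >= 1 due to self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt


def ode_func(A_norm, Z, W):
    """dz/dt: one graph-convolution-style message pass with a nonlinearity."""
    return np.tanh(A_norm @ Z @ W)


def integrate(A, Z0, W, t_end=1.0, steps=20):
    """Fixed-step Euler integration of the node-embedding ODE on [0, t_end]."""
    Z = Z0.copy()
    h = t_end / steps
    A_norm = normalize_adj(A)
    for _ in range(steps):
        Z = Z + h * ode_func(A_norm, Z, W)
    return Z


def link_score(Z, i, j):
    """Score a candidate edge (i, j) as a sigmoid of the embedding dot product."""
    return 1.0 / (1.0 + np.exp(-Z[i] @ Z[j]))


rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # toy 4-node snapshot
Z0 = rng.normal(size=(4, 8))                   # initial node embeddings
W = rng.normal(scale=0.1, size=(8, 8))         # illustrative "learned" weights
Z = integrate(A, Z0, W)
print(link_score(Z, 0, 3))                     # probability-like score in (0, 1)
```

In the paper's setting an adaptive ODE solver (e.g. Dormand–Prince, cited as reference [3] of the article) would replace the Euler loop, and W would be trained end-to-end against observed future snapshots.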
Pages: 1713-1740 (28 pages)