Non-dissipative Propagation by Randomized Anti-symmetric Deep Graph Networks

Times Cited: 0
Authors
Gravina, Alessio [1 ]
Gallicchio, Claudio [1 ]
Bacciu, Davide [1 ]
Affiliations
[1] Univ Pisa, Dept Comp Sci, I-56127 Pisa, Italy
Source
MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2023, PT V | 2025, Vol. 2137
Keywords
deep graph networks; graph neural network; neural ode; randomized neural networks;
DOI
10.1007/978-3-031-74643-7_3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, thanks to the efficiency of their adaptive message-passing scheme between nodes. However, DGNs are typically afflicted by a distortion of the information flowing from distant nodes (i.e., over-squashing) that limits their ability to learn long-range dependencies. This reduces their effectiveness, since predictive problems may require capturing interactions at different, and possibly large, radii in order to be solved effectively. We focus on Anti-symmetric Deep Graph Networks (A-DGNs), a recently proposed neural architecture for learning from graphs. A-DGNs are derived from stable and non-dissipative ordinary differential equations, with an anti-symmetric structure of the internal weights as their key architectural ingredient. In this paper, we investigate the merits of the resulting architectural bias by incorporating randomized internal connections in the node embedding computations and by restricting training to operate exclusively at the output layer. To empirically validate our approach, we conduct experiments on various graph benchmarks, demonstrating the effectiveness of the proposed approach in learning from graph data.
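The abstract describes an A-DGN variant in which the message-passing weights are random and fixed, the anti-symmetric weight parameterization keeps the underlying node-state ODE stable and non-dissipative, and only the output layer is trained. The sketch below illustrates this idea under stated assumptions: the function and parameter names (random_antisymmetric_layer, eps, gamma, steps) are illustrative, and the forward-Euler update is a generic reading of the abstract rather than the authors' exact formulation.

```python
import torch

torch.manual_seed(0)

def random_antisymmetric_layer(x, adj, W, V, bias, eps=0.1, gamma=0.1, steps=5):
    """Sketch of a randomized anti-symmetric graph layer (names are illustrative).

    W, V and bias are fixed random parameters; the anti-symmetric term
    (W - W^T), damped by a small gamma*I, is what keeps the node-state
    ODE stable and non-dissipative. Forward Euler with step size eps.
    """
    A = W - W.T - gamma * torch.eye(W.size(0))
    for _ in range(steps):
        agg = adj @ x @ V                      # simple neighbourhood aggregation
        x = x + eps * torch.tanh(x @ A.T + agg + bias)
    return x

# Toy data: n nodes with d hidden features and a random symmetric adjacency.
n, d = 6, 8
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.3).float()
adj = ((adj + adj.T) > 0).float()

# Fixed, untrained random internal connections (the randomized part).
W = torch.randn(d, d) / d ** 0.5
V = torch.randn(d, d) / d ** 0.5
bias = torch.zeros(d)

h = random_antisymmetric_layer(x, adj, W, V, bias)

# Only the readout is trained, e.g. a linear output layer (or ridge regression)
# fitted on the frozen node embeddings h.
readout = torch.nn.Linear(d, 2)
logits = readout(h)
```

In this reading, training reduces to fitting the readout on frozen embeddings, in the spirit of reservoir-computing approaches; everything upstream of the output layer stays at its random initialization.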
Pages: 25-36
Number of pages: 12