Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling

Cited by: 27
Authors
Bianchi, Filippo Maria [1 ,2 ]
Grattarola, Daniele [3 ]
Livi, Lorenzo [4 ,5 ]
Alippi, Cesare [3 ,6 ]
Affiliations
[1] UiT Arctic Univ Norway, Dept Math & Stat, N-9019 Tromso, Norway
[2] Norwegian Res Ctr NORCE, N-5008 Bergen, Norway
[3] Univ Svizzera Italiana, Fac Informat, CH-6904 Lugano, Switzerland
[4] Univ Manitoba, Dept Comp Sci & Math, Winnipeg, MB R3T 2N2, Canada
[5] Univ Exeter, Dept Comp Sci, Exeter EX4 4QJ, Devon, England
[6] Politecn Milan, Dept Elect Informat & Bioengn, I-20133 Milan, Italy
Funding
Swiss National Science Foundation
Keywords
Laplace equations; Partitioning algorithms; Symmetric matrices; Clustering algorithms; Training; Topology; Task analysis; Graph classification; graph neural networks (GNNs); graph pooling; Kron reduction; Maxcut optimization; CUT;
DOI
10.1109/TNNLS.2020.3044146
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a preprocessing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the MAXCUT solution. Afterward, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. Notably, we show that many edges can be removed without significantly altering the graph structure. Experimental results show that NDP is more efficient than state-of-the-art graph pooling operators while achieving competitive performance on a wide variety of graph classification tasks.
Pages: 2195-2207
Page count: 13
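
To make the three steps described in the abstract concrete, the following is a minimal NumPy sketch of a single NDP-style coarsening level, written from the abstract alone and not taken from the authors' implementation. It assumes a connected, undirected graph given as a dense adjacency matrix; the function name ndp_coarsen, the threshold eps, and the simple threshold pruning in the last step are illustrative placeholders standing in for the paper's sparsification procedure, and the sign-of-the-largest-Laplacian-eigenvector split is used here as a common spectral MAXCUT heuristic.

```python
# Minimal sketch (assumptions noted above): one NDP-style coarsening level
# for a connected, undirected graph. Names such as ndp_coarsen and eps are
# illustrative, not taken from the paper's code.
import numpy as np

def ndp_coarsen(A, eps=1e-2):
    """Return a coarsened adjacency matrix and the indices of the kept nodes."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A  # combinatorial graph Laplacian

    # Step 1 -- node decimation: split the nodes by the sign of the eigenvector
    # associated with the largest Laplacian eigenvalue (a spectral MAXCUT
    # heuristic) and keep one side of the partition.
    _, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    v_max = eigvecs[:, -1]
    keep = np.flatnonzero(v_max >= 0)
    drop = np.flatnonzero(v_max < 0)

    # Step 2 -- Kron reduction: Schur complement of L with respect to the
    # dropped nodes, which reconnects the kept nodes while preserving the
    # overall topology.
    L_kk = L[np.ix_(keep, keep)]
    L_kd = L[np.ix_(keep, drop)]
    L_dd = L[np.ix_(drop, drop)]
    L_red = L_kk - L_kd @ np.linalg.solve(L_dd, L_kd.T)

    # Turn the reduced Laplacian back into an adjacency matrix.
    A_red = -L_red
    np.fill_diagonal(A_red, 0.0)

    # Step 3 -- sparsification: the Kron-reduced graph is dense, so prune weak
    # edges (a plain absolute threshold stands in for the paper's procedure).
    A_red[A_red < eps] = 0.0
    return A_red, keep

# Toy usage: coarsen a 6-node ring graph down to 3 nodes.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
A_coarse, kept = ndp_coarsen(A)
print(kept, A_coarse.shape)
```

In the paper, such a coarsening step is applied repeatedly to build the pyramid of graphs offline, so that during training the GNN only has to pool node features onto precomputed structures; this is consistent with the efficiency advantage over other pooling operators reported in the abstract.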