Propagation Enhanced Neural Message Passing for Graph Representation Learning

Cited by: 26
Authors
Fan, Xiaolong [1 ]
Gong, Maoguo [1 ]
Wu, Yue [2 ]
Qin, A. K. [3 ]
Xie, Yu [4 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Key Lab Intelligent Percept, Image Understanding, Minist Educ, Xian 710126, Shaanxi, Peoples R China
[2] Xidian Univ, Sch Comp Sci & Technol, Xian 710126, Shaanxi, Peoples R China
[3] Swinburne Univ Technol, Dept Comp Technol, Melbourne, VIC 3122, Australia
[4] Shanxi Univ, Key Lab Computat Intelligence & Chinese Informat Proc, Minist Educ, Taiyuan 030006, Peoples R China
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Message passing; Aggregates; Task analysis; Data models; Predictive models; Graph neural networks; Adaptation models; Graph data mining; graph representation learning; graph neural network; neural message passing; NETWORK;
DOI
10.1109/TKDE.2021.3102964
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Neural Networks (GNNs) make it possible to apply deep neural networks to graph domains. Recently, Message Passing Neural Networks (MPNNs) were proposed to generalize several existing graph neural networks into a unified framework. For graph representation learning, MPNNs first generate discriminative node representations using a message passing function and then read out from the node representation space to generate a graph representation using a readout function. In this paper, we analyze the representation capacity of MPNNs for aggregating graph information and observe that existing approaches ignore the self-loop for graph representation learning, leading to limited representation capacity. To alleviate this issue, we introduce a simple yet effective propagation-enhanced extension, Self-Connected Neural Message Passing (SC-NMP), which aggregates the node representations of the current step and the graph representation of the previous step. To further improve the information flow, we also propose Densely Self-Connected Neural Message Passing (DSC-NMP), which connects each layer to every other layer in a feed-forward fashion. Both architectures are applied at each layer, and the resulting graph representation is then used as input to all subsequent layers. Remarkably, combining these two architectures with existing GNN variants improves their performance on graph representation learning. Extensive experiments on various benchmark datasets demonstrate the effectiveness of the proposed extensions, which achieve superior performance on graph classification and regression tasks.
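To make the self-connection idea concrete, the following is a minimal PyTorch-style sketch (not the authors' released code) of a layer that updates node states by message passing, reads out a graph vector, and feeds the previous step's graph vector back into the next update. The class name SCMessagePassingLayer, the dense adjacency matrix, the sum aggregation and sum readout, and the linear maps are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn


class SCMessagePassingLayer(nn.Module):
    """One propagation step that also consumes the previous graph representation (illustrative)."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)        # transforms sum-aggregated neighbor messages
        self.self_loop = nn.Linear(dim, dim)  # keeps each node's own previous state
        self.graph_in = nn.Linear(dim, dim)   # injects the previous step's graph representation
        self.act = nn.ReLU()

    def forward(self, h, adj, g_prev):
        # h:      [num_nodes, dim] node states
        # adj:    [num_nodes, num_nodes] dense adjacency matrix
        # g_prev: [dim] graph representation from the previous step
        neigh = adj @ h  # sum aggregation over neighbors
        h_new = self.act(self.msg(neigh) + self.self_loop(h) + self.graph_in(g_prev))
        g_new = h_new.sum(dim=0)  # simple sum readout
        return h_new, g_new


if __name__ == "__main__":
    num_nodes, dim, num_layers = 5, 16, 3
    h = torch.randn(num_nodes, dim)
    adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
    g = h.sum(dim=0)  # initial graph representation
    layers = nn.ModuleList([SCMessagePassingLayer(dim) for _ in range(num_layers)])
    for layer in layers:
        h, g = layer(h, adj, g)  # feed the latest graph vector into the next layer
    print(g.shape)  # torch.Size([16])

A densely self-connected variant in the spirit of DSC-NMP would additionally keep the graph vectors produced by all previous layers and combine them (for example, by concatenation or summation) before feeding them into each subsequent layer.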
Pages: 1952-1964
Number of pages: 13