GPENs: Graph Data Learning With Graph Propagation-Embedding Networks

Cited by: 10
Authors
Jiang, Bo [1 ]
Wang, Leiling [1 ]
Cheng, Jian [2 ]
Tang, Jin [1 ]
Luo, Bin [1 ]
Affiliations
[1] Anhui Univ, Anhui Prov Key Lab Multimodal Cognit Computat, Sch Comp Sci & Technol, Hefei 230601, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Computer architecture; Semisupervised learning; Deep learning; Laplace equations; Data models; Labeling; Graph embedding; graph neural networks (GNNs); graph propagation; semi-supervised learning; LABEL PROPAGATION; FRAMEWORK;
DOI
10.1109/TNNLS.2021.3120100
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Compact representation of graph data is a fundamental problem in pattern recognition and machine learning. Recently, graph neural networks (GNNs) have been widely studied for graph-structured data representation and learning tasks, such as graph semi-supervised learning, clustering, and low-dimensional embedding. In this article, we present graph propagation-embedding networks (GPENs), a new model for graph-structured data representation and learning problems. GPENs are mainly motivated by 1) a revisiting of traditional graph propagation techniques for graph node context-aware feature representation and 2) recent studies on deep graph embedding and neural network architectures. GPENs integrate both feature propagation on the graph and low-dimensional embedding simultaneously into a unified network using a novel propagation-embedding architecture. GPENs have three main advantages. First, GPENs can be well motivated and explained from the perspectives of feature propagation and deep learning architectures. Second, the equilibrium representation of the propagation-embedding operation in GPENs has both exact and approximate formulations, both of which admit simple closed-form solutions. This guarantees the compactness and efficiency of GPENs. Third, GPENs can be naturally extended to multiple GPENs (M-GPENs) to address data with multiple graph structures. Experiments on various semi-supervised learning tasks on several benchmark datasets demonstrate the effectiveness and benefits of the proposed GPENs and M-GPENs.
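The abstract appeals to traditional graph propagation with a closed-form equilibrium. A minimal sketch of that classical ingredient is below; this is NOT the authors' exact GPEN layer (the paper's propagation-embedding operator, normalization, and parameter names are not reproduced here) but the standard feature-propagation fixed point such models build on: iterating X ← αÂX + (1−α)X₀ converges to the closed-form equilibrium X* = (1−α)(I − αÂ)⁻¹X₀.

```python
import numpy as np

# Hedged sketch of classical graph feature propagation, not the GPEN model
# itself; alpha and the symmetric normalization are standard conventions.

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def propagate_iterative(A_hat, X0, alpha=0.8, n_iters=200):
    """Approximate formulation: iterate X <- alpha*A_hat@X + (1-alpha)*X0."""
    X = X0.copy()
    for _ in range(n_iters):
        X = alpha * (A_hat @ X) + (1.0 - alpha) * X0
    return X

def propagate_closed_form(A_hat, X0, alpha=0.8):
    """Exact formulation: equilibrium X* = (1-alpha)(I - alpha*A_hat)^{-1} X0."""
    n = A_hat.shape[0]
    return (1.0 - alpha) * np.linalg.solve(np.eye(n) - alpha * A_hat, X0)

# Tiny 4-node path graph with 2-D node features (illustrative data only).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X0 = np.random.default_rng(0).normal(size=(4, 2))
A_hat = normalized_adjacency(A)
X_iter = propagate_iterative(A_hat, X0)
X_closed = propagate_closed_form(A_hat, X0)
# The iteration converges to the closed-form equilibrium.
assert np.allclose(X_iter, X_closed, atol=1e-6)
```

Because the spectral radius of αÂ is below 1 for α < 1, the iteration contracts geometrically, which is why both the exact solve and a few propagation steps give essentially the same representation.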
Pages: 3925-3938
Page count: 14