Diffusion induced graph representation learning

Cited by: 4
Authors
Li, Fuzhen [1 ,2 ]
Zhu, Zhenfeng [1 ,2 ]
Zhang, Xingxing [1 ,2 ]
Cheng, Jian [3 ]
Zhao, Yao [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph representation learning; Graph embedding; Diffusion model; Auto-encoder; Deep learning; DIMENSIONALITY; DISTRIBUTIONS;
DOI
10.1016/j.neucom.2019.06.012
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph representation learning has attracted considerable research interest; it aims to learn latent low-dimensional representations of graph nodes while preserving the graph structure. Based on the local smoothness assumption, some existing methods have achieved significant success. However, although these models take the structural information of the data into account, they fail to capture richer connectivity patterns such as high-order connections. To alleviate this issue, we propose a Graph Diffusion Network (GDN) that can dynamically preserve both local and global consistency of the graph. More specifically, a Graph Diffusion Auto-encoder serves as the main framework in GDN to nonlinearly maintain the global information volume. Unlike a simple auto-encoder, forward propagation in our model is conducted through a Graph Diffusion System, which guides the random walk of information flow to sense high-order local relationships on the graph. Furthermore, to discover a customized graph structure that reveals the similarities between nodes, the connection relationships between nodes are refined by learned metrics while preserving the scale-free property. By dynamically self-refining the graph structure, the model is progressively driven toward learning intrinsic node representations. Experimental results on node classification tasks demonstrate the effectiveness of the proposed GDN model. (C) 2019 Elsevier B.V. All rights reserved.
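The GDN implementation itself is not reproduced in this record. As an illustration only, the following minimal sketch shows the general idea of diffusion-style propagation that the abstract describes: node features are repeatedly spread over a normalized adjacency matrix (here a teleported random-walk diffusion, a common choice) so that each node's representation absorbs high-order neighborhood information. The function name, the `alpha` teleport parameter, and the toy graph are assumptions for the example, not the paper's actual model.

```python
import numpy as np

def diffusion_propagate(A, X, steps=3, alpha=0.15):
    """Spread node features X over graph A via a teleported random-walk
    diffusion, so each node aggregates multi-hop neighborhood information.
    A: (n, n) adjacency matrix, X: (n, d) node features."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    T = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization
    H = X.copy()
    for _ in range(steps):
        # diffuse one hop, while keeping a fixed share of the original input
        H = (1 - alpha) * (T @ H) + alpha * X
    return H

# Toy 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot features, so H[i, j] tracks influence of node j on node i
H = diffusion_propagate(A, X)
```

After three diffusion steps, even the two endpoints of the path influence each other (`H[0, 3] > 0`), which is the kind of high-order connectivity a purely local smoothness model would miss.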
Pages: 220-229
Page count: 10