Graph Representation Learning via Adversarial Variational Bayes

Cited by: 2
Authors
Li, Yunhe [1 ]
Hu, Yaochen [2 ]
Zhang, Yingxue [2 ]
Affiliations
[1] Univ Montreal, Montreal, PQ, Canada
[2] Huawei Noah's Ark Lab, Montreal, PQ, Canada
Source
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021 | 2021
Keywords
Graph Representation Learning; Adversarial Variational Bayes;
DOI
10.1145/3459637.3482116
CLC Classification Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Methods that learn representations of nodes in a graph play an important role in network analysis. Most existing graph representation learning methods embed each node as a single vector in a low-dimensional continuous space. However, these methods have a crucial limitation: they do not model the uncertainty of the representation. In this work, inspired by Adversarial Variational Bayes (AVB) [22], we propose GraphAVB, a probabilistic generative model that learns node representations which preserve connectivity patterns and capture the uncertainties in the graph. Unlike Graph2Gauss [3], which embeds each node as a Gaussian distribution, we represent each node as an implicit distribution parameterized by a neural network in the latent space, which is more flexible and expressive in capturing the complex uncertainties of real-world graph-structured datasets. To carry out the designed variational inference algorithm with neural samplers, we introduce an auxiliary discriminative network that infers the log probability ratio terms in the objective function, allowing us to cast maximizing the objective as a two-player game. Experimental results on multiple real-world graph datasets demonstrate the effectiveness of GraphAVB, which outperforms many competitive baselines on the task of link prediction. Its superior performance also suggests that downstream tasks can benefit from the captured uncertainty.
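The AVB-style objective sketched in the abstract can be illustrated with a toy NumPy example. This is a minimal sketch under stated assumptions, not the paper's implementation: the encoder here is a small linear-tanh sampler over generic feature vectors (GraphAVB would use a graph neural network over node features and edges), and `log_lik` is a placeholder for the edge-reconstruction likelihood. The key point it shows is the two-player structure: the discriminator `T(x, z)` is trained with a logistic loss to separate posterior samples from prior samples, and at its optimum it equals `log q(z|x) - log p(z)`, the intractable ratio needed in the ELBO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
d_x, d_eps, d_z, n = 8, 4, 2, 16

# Implicit posterior sampler q(z|x): a network of node features x and noise eps.
W_enc = rng.normal(size=(d_x + d_eps, d_z))

def sample_q(x):
    eps = rng.normal(size=(x.shape[0], d_eps))      # auxiliary noise source
    return np.tanh(np.concatenate([x, eps], axis=1) @ W_enc)

# Discriminator T(x, z): tells (x, z ~ q(z|x)) apart from (x, z ~ p(z)).
W_dis = rng.normal(size=(d_x + d_z, 1))

def T(x, z):
    return (np.concatenate([x, z], axis=1) @ W_dis).ravel()

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

x = rng.normal(size=(n, d_x))
z_q = sample_q(x)                      # samples from the implicit posterior
z_p = rng.normal(size=(n, d_z))        # samples from the prior p(z) = N(0, I)

# Player 1 (discriminator): standard logistic loss. At the optimum,
# T*(x, z) = log q(z|x) - log p(z).
loss_T = -np.mean(np.log(sigmoid(T(x, z_q)) + 1e-8)
                  + np.log(1.0 - sigmoid(T(x, z_p)) + 1e-8))

# Player 2 (encoder/decoder): ELBO surrogate where T replaces the KL term;
# `log_lik` is a stand-in for the edge-reconstruction likelihood.
log_lik = -np.mean(np.sum(z_q ** 2, axis=1))         # placeholder likelihood
elbo = log_lik - np.mean(T(x, z_q))
```

Alternating gradient steps on `loss_T` (for the discriminator) and `-elbo` (for the sampler) would realize the two-player game described above.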
Pages: 3237-3241 (5 pages)
References
42 in total
  • [1] Ahmed A., 2013, WWW 2013, P37, DOI 10.1145/2488388.2488393
  • [2] [Anonymous], 2017, ADV NEURAL INFORM PR
  • [3] [Anonymous], 2014, Word representations via gaussian embedding
  • [4] [Anonymous], 2016, ARXIV161106645
  • [5] Blei D.M., Kucukelbir A., McAuliffe J.D., Variational Inference: A Review for Statisticians, JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2017, 112(518): 859-877
  • [6] Bojchevski A., 2018, P 6 INT C LEARN REPR, P1
  • [7] Brazinskas Arthur, 2017, ARXIV171111027
  • [8] Cao S., 2015, P 24 ACM INT C INF K, P891, DOI 10.1145/2806416.2806512
  • [9] Crammer K., Singer Y., On the learnability and design of output codes for multiclass problems, MACHINE LEARNING, 2002, 47(2-3): 201-233
  • [10] Dai Q, 2017, ARXIV171107838