An Introduction to Variational Autoencoders

Cited by: 1470
Authors
Kingma, Diederik P. [1 ]
Welling, Max [2 ,3 ]
Affiliations
[1] Google, Mountain View, CA 94043 USA
[2] Univ Amsterdam, Amsterdam, Netherlands
[3] Qualcomm, San Diego, CA USA
Source
FOUNDATIONS AND TRENDS IN MACHINE LEARNING | 2019, Vol. 12, No. 4
Keywords
GRADIENT; LIKELIHOOD; ALGORITHMS; MODELS;
DOI
10.1561/2200000056
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models. In this work, we provide an introduction to variational autoencoders and some important extensions.
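The "principled framework" the abstract refers to is maximization of the evidence lower bound (ELBO), made tractable by the reparameterization trick the monograph develops. As a minimal illustrative sketch (function names, shapes, and the standard-normal prior are our assumptions, not quoted from the text), the trick and the closed-form KL term for a diagonal-Gaussian encoder look like:

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I).

    Moving the noise outside the deterministic path keeps z
    differentiable with respect to mu and logvar (the
    reparameterization trick).
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), per sample.

    This is the analytic KL term of the ELBO for a diagonal-Gaussian
    posterior against a standard-normal prior.
    """
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((1, 4))       # encoder mean (hypothetical output)
logvar = np.zeros((1, 4))   # encoder log-variance (hypothetical output)
z = reparameterize(mu, logvar, rng)  # latent sample, shape (1, 4)
kl = gaussian_kl(mu, logvar)         # 0 when the posterior equals the prior
```

In a full VAE the negative ELBO combines this KL term with a reconstruction log-likelihood from the decoder; the sketch above isolates only the two pieces specific to the variational machinery.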
Pages: 4-89 (86 pages)
References
139 items
[71] Heess N, 2015, Advances in Neural Information Processing Systems, V28
[72] Hernandez-Lobato JM, 2016, Black-Box Alpha Divergence Minimization
[73] Higgins I, 2017, International Conference on Learning Representations, V2, P1
[74] Hinton GE, Dayan P, Frey BJ, Neal RM, 1995, The Wake-Sleep Algorithm for Unsupervised Neural Networks, Science, V268, P1158-1161
[75] Hochreiter S, 1997, Neural Computation, V9, P1735
[76] Hoffman MD, 2016, NeurIPS Workshop
[77] Hoffman MD, 2013, Journal of Machine Learning Research, V14, P1303
[78] Houthooft R, 2016, Advances in Neural Information Processing Systems, V29
[79] Jang E, 2017, International Conference on Learning Representations
[80] Johnson MJ, 2016, Advances in Neural Information Processing Systems, V29