Improving Gaussian mixture latent variable model convergence with Optimal Transport

Cited: 0
Authors
Gaujac, Benoit [1]
Feige, Ilya
Barber, David [1]
Affiliations
[1] UCL, London, England
Source
ASIAN CONFERENCE ON MACHINE LEARNING, VOL 157 | 2021
Keywords
Optimal Transport; Wasserstein Autoencoder; Variational Autoencoder; Latent variable modeling; Generative modeling
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Generative models with both discrete and continuous latent variables are strongly motivated by the structure of many real-world data sets. They present, however, subtleties in training that often manifest in the discrete latent variable not being leveraged. In this paper, we show why such models struggle to train under traditional log-likelihood maximization, and that they are amenable to training within the Optimal Transport framework of Wasserstein Autoencoders. We find the discrete latent variable to be fully leveraged by the trained model, without any modification to the objective function or significant fine-tuning. Our model generates samples comparable to those of other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent variable provides significant control over generation.
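For context on the objective referenced above: the Wasserstein Autoencoder framework of Tolstikhin et al. (2018) relaxes the Optimal Transport cost between the data distribution P_X and the model into a reconstruction term plus a latent divergence penalty. A minimal sketch of that general objective with a Gaussian mixture prior follows; the paper's exact formulation and choice of divergence may differ.

\min_{Q(Z \mid X)} \; \mathbb{E}_{P_X}\, \mathbb{E}_{Q(Z \mid X)}\!\left[ c\!\left(X, G(Z)\right) \right] + \lambda\, \mathcal{D}_Z\!\left(Q_Z, P_Z\right),
\qquad
P_Z(z) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left(z; \mu_k, \Sigma_k\right),

where G is the decoder, c a reconstruction cost, \mathcal{D}_Z a divergence matching the aggregate posterior Q_Z to the prior P_Z, and the mixture index k plays the role of the discrete latent variable.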
Pages: 737-752
Page count: 16