Variational Autoencoders Pursue PCA Directions (by Accident)

Cited by: 54
Authors
Rolinek, Michal [1]
Zietlow, Dominik [1]
Martius, Georg [1]
Affiliations
[1] Max Planck Institute for Intelligent Systems, Tübingen, Germany
Source
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) | 2019
DOI
10.1109/CVPR.2019.01269
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The Variational Autoencoder (VAE) is a powerful architecture capable of representation learning and generative modeling. When it comes to learning interpretable (disentangled) representations, VAE and its variants show unparalleled performance. However, the reasons for this are unclear, since a very particular alignment of the latent embedding is needed but the design of the VAE does not encourage it in any explicit way. We address this matter and offer the following explanation: the diagonal approximation in the encoder together with the inherent stochasticity force local orthogonality of the decoder. The local behavior of promoting both reconstruction and orthogonality matches closely how the PCA embedding is chosen. Alongside providing an intuitive understanding, we justify the statement with full theoretical analysis as well as with experiments.
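The mechanism the abstract describes can be illustrated numerically. Below is a minimal sketch (not the authors' code, and using PyTorch as an assumed framework): a linear VAE with a diagonal-covariance encoder is trained on synthetic low-rank Gaussian data, and the learned decoder columns are compared against the leading PCA directions via principal angles. All architecture and hyperparameter choices (latent size, learning rate, step count) are illustrative assumptions.

```python
# Minimal sketch: a *linear* VAE trained on synthetic data whose leading
# principal subspace is known. If the diagonal posterior pushes the decoder
# toward PCA-like directions, the principal angles between the decoder's
# column span and the top PCA directions should be near zero (cosines near 1).
import numpy as np
import torch

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Synthetic data in R^5 with 2 dominant directions plus small isotropic noise.
n, d, k = 2000, 5, 2
basis = np.linalg.qr(rng.normal(size=(d, d)))[0][:, :k]       # orthonormal d x k
X = rng.normal(size=(n, k)) * np.array([3.0, 1.5]) @ basis.T  # low-rank signal
X += 0.1 * rng.normal(size=(n, d))
X = torch.tensor(X, dtype=torch.float32)

# Linear encoder (mean and diagonal log-variance) and linear decoder.
enc_mu = torch.nn.Linear(d, k)
enc_logvar = torch.nn.Linear(d, k)
dec = torch.nn.Linear(k, d)
opt = torch.optim.Adam([*enc_mu.parameters(),
                        *enc_logvar.parameters(),
                        *dec.parameters()], lr=1e-2)

for step in range(3000):
    mu, logvar = enc_mu(X), enc_logvar(X)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
    recon = dec(z)
    rec_loss = ((recon - X) ** 2).sum(dim=1).mean()           # Gaussian recon term
    kl = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(dim=1).mean()
    loss = rec_loss + kl                                      # negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare decoder columns with PCA: cosines of principal angles near 1 mean
# the decoder spans the leading principal subspace (and, up to sign and scale,
# its columns individually align with the PCA directions).
W = dec.weight.detach().numpy()                               # d x k
_, _, Vt = np.linalg.svd(X.numpy() - X.numpy().mean(0), full_matrices=False)
pca_dirs = Vt[:k].T                                           # leading PCA dirs, d x k
Qw, _ = np.linalg.qr(W)
cos_angles = np.linalg.svd(Qw.T @ pca_dirs)[1]
print("cosines of principal angles:", np.round(cos_angles, 3))
```

Running this typically prints cosines close to 1, consistent with the paper's claim that the VAE objective, via the diagonal posterior, promotes locally orthogonal decoder directions matching the PCA embedding; the full nonlinear analysis is in the paper itself.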
Pages: 12398-12407
Number of pages: 10