MaskAAE: Latent space optimization for Adversarial Auto-Encoders

Cited by: 0
Authors
Mondal, Arnab Kumar [1 ]
Chowdhury, Sankalan Pal [1 ]
Jayendran, Aravind [1 ,2 ]
Singla, Parag [1 ]
Asnani, Himanshu [3 ]
Prathosh, A. P. [1 ]
Affiliations
[1] IIT Delhi, Delhi, India
[2] Flipkart Internet Pvt Ltd, Bengaluru, Karnataka, India
[3] TIFR, Mumbai, Maharashtra, India
Source
CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020) | 2020, Vol. 124
Keywords
DOI
(not available)
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs) despite their challenges, such as training instability and mode collapse. Auto-Encoders (AE) with regularized latent space provide an alternative framework for generative models, although their performance has not reached that of GANs. In this work, we hypothesise that the dimensionality of the AE model's latent space has a critical effect on the quality of generated data. Under the assumption that nature generates data by sampling from a "true" generative latent space followed by a deterministic function, we show that optimal performance is obtained when the dimensionality of the AE model's latent space matches that of the "true" generative latent space. Further, we propose an algorithm called the Mask Adversarial Auto-Encoder (MaskAAE), in which the dimensionality of the latent space of an adversarial auto-encoder is brought closer to that of the "true" generative latent space, via a procedure to mask the spurious latent dimensions. We demonstrate through experiments on synthetic and several real-world datasets that the proposed formulation yields an improvement in generation quality.
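The masking procedure described in the abstract — switching off spurious dimensions of an over-provisioned latent code — can be sketched as below. The mask parameterisation (a sigmoid over trainable logits, binarised by thresholding) and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def apply_mask(z, mask_logits, threshold=0.5):
    """Gate latent dimensions of z with a (near-)binary mask.

    Hypothetical sketch: a trainable logit per latent dimension is
    squashed through a sigmoid; dimensions whose soft mask falls below
    the threshold are treated as spurious and zeroed out.
    """
    soft_mask = 1.0 / (1.0 + np.exp(-mask_logits))      # sigmoid -> values in (0, 1)
    hard_mask = (soft_mask > threshold).astype(z.dtype)  # binarise for masking
    return z * hard_mask, int(hard_mask.sum())           # masked code, active dim count

rng = np.random.default_rng(0)
z = rng.normal(size=8)  # over-provisioned latent code, d = 8
# Illustrative "learned" logits: positive keeps a dimension, negative drops it.
mask_logits = np.array([3.0, 3.0, -3.0, 3.0, -3.0, -3.0, 3.0, -3.0])
z_masked, active_dims = apply_mask(z, mask_logits)  # active_dims -> 4
```

In the paper the mask is trained jointly with the adversarial auto-encoder so that the number of active dimensions approaches that of the "true" generative latent space; this snippet only shows the gating operation itself.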
Pages: 689-698 (10 pages)