MaskAAE: Latent space optimization for Adversarial Auto-Encoders

Cited by: 0
Authors
Mondal, Arnab Kumar [1 ]
Chowdhury, Sankalan Pal [1 ]
Jayendran, Aravind [1 ,2 ]
Singla, Parag [1 ]
Asnani, Himanshu [3 ]
Prathosh, A. P. [1 ]
Affiliations
[1] IIT Delhi, Delhi, India
[2] Flipkart Internet Pvt Ltd, Bengaluru, Karnataka, India
[3] TIFR, Mumbai, Maharashtra, India
Source
CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020) | 2020 / Vol. 124
Keywords
(none listed)
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs), despite their challenges such as training instability and mode collapse. Auto-Encoders (AE) with regularized latent space provide an alternative framework for generative models, albeit their performance levels have not reached that of GANs. In this work, we hypothesise that the dimensionality of the AE model's latent space has a critical effect on the quality of generated data. Under the assumption that nature generates data by sampling from a "true" generative latent space followed by a deterministic function, we show that the optimal performance is obtained when the dimensionality of the latent space of the AE model matches that of the "true" generative latent space. Further, we propose an algorithm called the Mask Adversarial Auto-Encoder (MaskAAE), in which the dimensionality of the latent space of an adversarial auto-encoder is brought closer to that of the "true" generative latent space, via a procedure to mask the spurious latent dimensions. We demonstrate through experiments on synthetic and several real-world datasets that the proposed formulation yields an improvement in generation quality.
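The masking idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the mask parameters, threshold, and dimensions below are hypothetical, chosen only to show how a learned gate could zero out spurious latent dimensions so that the effective latent dimensionality shrinks toward the "true" one.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 8  # dimensionality chosen for the AE model (assumed)
# In MaskAAE the mask is learned jointly with the auto-encoder; here we
# hard-code hypothetical mask logits where three dimensions survive.
mask_logits = np.array([4.0, 3.5, 5.0, -4.0, -3.0, -5.0, -4.5, -3.5])

# Sigmoid followed by a 0.5 threshold gives a {0, 1} gate per dimension.
mask = (1.0 / (1.0 + np.exp(-mask_logits)) > 0.5).astype(float)

z = rng.standard_normal(latent_dim)  # latent code from a (hypothetical) encoder
z_masked = mask * z                  # spurious dimensions are zeroed out

print(int(mask.sum()))  # number of active latent dimensions → 3
```

In the paper's setting, the masked code `z_masked` would feed the decoder and the adversarial regularizer, so gradients can drive the mask toward the "true" generative dimensionality; this sketch only shows the gating mechanics.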
Pages
689 / 698
Page count
10