End-to-End Sinkhorn Autoencoder With Noise Generator

Cited by: 7
Authors
Deja, Kamil [1 ]
Dubinski, Jan [1 ]
Nowak, Piotr [2 ]
Wenzel, Sandro [3 ]
Spurek, Przemyslaw [4 ]
Trzcinski, Tomasz [1 ,5 ]
Affiliations
[1] Warsaw Univ Technol, Fac Elect & Informat Technol, PL-00661 Warsaw, Poland
[2] Warsaw Univ Technol, Fac Phys, PL-00661 Warsaw, Poland
[3] CERN, CH-1211 Geneva, Switzerland
[4] Jagiellonian Univ, Fac Math & Comp Sci, PL-31007 Krakow, Poland
[5] Tooploox, PL-53601 Wroclaw, Poland
Keywords
Computer simulation; generative modeling; machine learning
DOI
10.1109/ACCESS.2020.3048622
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
In this work, we propose a novel end-to-end Sinkhorn Autoencoder with a noise generator for efficient simulation of data collection. Simulating processes that aim at collecting experimental data is crucial for multiple real-life applications, including nuclear medicine, astronomy, and high-energy physics. Contemporary methods, such as Monte Carlo algorithms, provide high-fidelity results at the price of high computational cost. Multiple attempts have been made to reduce this burden, e.g. using generative approaches based on Generative Adversarial Networks or Variational Autoencoders. Although such methods are much faster, they are often unstable in training and do not allow sampling from the entire data distribution. To address these shortcomings, we introduce a novel method, dubbed end-to-end Sinkhorn Autoencoder, that leverages the Sinkhorn algorithm to explicitly align the distributions of encoded real data examples and generated noise. More precisely, we extend the autoencoder architecture by adding a deterministic neural network trained to map noise from a known distribution onto the autoencoder latent space, which represents the data distribution. We optimise the entire model jointly. Our method outperforms competing approaches on a challenging dataset of simulation data from the Zero Degree Calorimeters of the ALICE experiment at the LHC, as well as on standard benchmarks such as MNIST and CelebA.
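The Sinkhorn algorithm mentioned in the abstract computes an entropy-regularised optimal-transport cost between two point clouds via simple matrix-scaling iterations (Cuturi, 2013). The following is a minimal NumPy sketch of that computation; the function name, uniform weights, and default parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sinkhorn_divergence(x, y, eps=0.5, n_iters=200):
    """Entropy-regularised transport cost between point clouds x (n,d) and y (m,d).

    Illustrative sketch: uniform marginals, squared-Euclidean ground cost.
    """
    # Pairwise squared-Euclidean cost matrix, shape (n, m).
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                       # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))          # uniform source weights
    b = np.full(len(y), 1.0 / len(y))          # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iters):                   # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]            # approximate transport plan
    return float((P * C).sum())                # transport cost under the plan
```

In the paper's setting, a loss of this kind is differentiable in the inputs, so it can be minimised jointly over the encoder outputs and the noise generator's outputs to align the two latent distributions.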
Pages: 7211-7219
Page count: 9
References
47 in total