Empirical Evaluation of Variational Autoencoders for Data Augmentation

Cited by: 13
Authors
Jorge, Javier [1 ]
Vieco, Jesus [1 ]
Paredes, Roberto [1 ]
Andreu Sanchez, Joan [1 ]
Miguel Benedi, Jose [1 ]
Affiliations
[1] Univ Politecn Valencia, Dept Sistemas Informat & Computat, Valencia, Spain
Source
PROCEEDINGS OF THE 13TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2018), VOL 5: VISAPP | 2018
Keywords
Generative Models; Data Augmentation; Variational Autoencoder;
DOI
10.5220/0006618600960104
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Since the beginning of Neural Networks, different mechanisms have been required to provide a sufficient number of examples to avoid overfitting. Data augmentation, the most common one, focuses on generating new instances by applying different distortions to the real samples. Usually, these transformations are problem-dependent, and they result in a synthetic set of likely unseen examples. In this work, we have studied a generative model, based on the encoder-decoder paradigm, that works directly in the data space, that is, with images. This model encodes the input into a latent space where different transformations are applied. Afterwards, we can reconstruct the latent vectors to obtain new samples. We have analysed various procedures according to the distortions that could be carried out, as well as the effectiveness of this process in improving the accuracy of different classification systems. To do this, we could use both the latent space and the original space after reconstructing the altered version of these vectors. Our results show that this pipeline (encoding-altering-decoding) helps the generalisation of the selected classifiers.
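The encoding-altering-decoding pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the trained VAE encoder and decoder networks of the paper are stood in for by a hypothetical linear projection and its pseudo-inverse, and Gaussian perturbation of the latent vectors is one example of the latent-space transformations the paper explores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained encoder/decoder pair: a random linear
# projection and its pseudo-inverse. In the paper's setting these would
# be the trained VAE encoder and decoder networks.
D, Z = 64, 8                      # data and latent dimensionalities
W = rng.standard_normal((D, Z))   # "decoder" weights
W_pinv = np.linalg.pinv(W)        # "encoder" as the pseudo-inverse

def encode(x):
    """Map data-space samples (n, D) into the latent space (n, Z)."""
    return x @ W_pinv.T

def decode(z):
    """Map latent vectors (n, Z) back to the data space (n, D)."""
    return z @ W.T

def augment(batch, sigma=0.1):
    """Encode real samples, perturb their latent vectors with Gaussian
    noise, and decode back to the data space (encode-alter-decode)."""
    z = encode(batch)
    z_noisy = z + sigma * rng.standard_normal(z.shape)
    return decode(z_noisy)

real = rng.standard_normal((32, D))   # a batch of 32 "images"
synthetic = augment(real)
print(synthetic.shape)                # (32, 64)
```

The synthetic batch can then be appended to the real training set, or, as the abstract notes, classifiers can alternatively be trained on the perturbed latent vectors themselves.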
Pages: 96-104 (9 pages)
Cited References
14 in total
[1]  
[Anonymous], 2017, P IEEE C COMP VIS PA
[2]  
[Anonymous], 1985, PARALLEL DISTRIBUTED
[3]  
[Anonymous], 2013, P INT C MACHINE LEAR
[4]  
[Anonymous], TRACK 2014 PROC 2 IN
[5]   SMOTE: Synthetic minority over-sampling technique [J].
Chawla, Nitesh V. ;
Bowyer, Kevin W. ;
Hall, Lawrence O. ;
Kegelmeyer, W. Philip .
JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2002, 16 :321-357
[6]  
DeVries T., 2017, ARXIV170205538
[7]   Generative Adversarial Networks [J].
Goodfellow, Ian ;
Pouget-Abadie, Jean ;
Mirza, Mehdi ;
Xu, Bing ;
Warde-Farley, David ;
Ozair, Sherjil ;
Courville, Aaron ;
Bengio, Yoshua .
COMMUNICATIONS OF THE ACM, 2020, 63 (11) :139-144
[8]   Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels [J].
Han, Bo ;
Yao, Quanming ;
Yu, Xingrui ;
Niu, Gang ;
Xu, Miao ;
Hu, Weihua ;
Tsang, Ivor W. ;
Sugiyama, Masashi .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
[9]   Virtualized Sub-GHz Transmission Paired with Mobile Access for the Tactile Internet [J].
Holland, Oliver ;
Wong, Stan ;
Friderikos, Vasilis ;
Qin, Zhijin ;
Gao, Yue .
2016 23RD INTERNATIONAL CONFERENCE ON TELECOMMUNICATIONS (ICT), 2016,
[10]   ImageNet Classification with Deep Convolutional Neural Networks [J].
Krizhevsky, Alex ;
Sutskever, Ilya ;
Hinton, Geoffrey E. .
COMMUNICATIONS OF THE ACM, 2017, 60 (06) :84-90