Generative Adversarial Networks Based on Penalty of Conditional Entropy Distance

Cited by: 0
Authors
Tan H.-W. [1 ,2 ]
Wang G.-D. [1 ]
Zhou L.-Y. [2 ]
Zhang Z.-L. [1 ,3 ]
Affiliations
[1] College of Computer and Information Science, Southwest University, Chongqing
[2] School of Mathematics and Statistics, Guizhou University of Finance and Economics, Guiyang
[3] School of Information Technology, Deakin University, Locked Bag 20000, Geelong, 3220, VIC
Source
Zhang, Zi-Li (zhangzl@swu.edu.cn) | Chinese Academy of Sciences, Vol. 32
Funding
National Natural Science Foundation of China
Keywords
Conditional entropy distance; Generative adversarial networks; Image generation; Network structure; Sample diversity;
DOI
10.13328/j.cnki.jos.006156
Abstract
Generating high-quality samples has always been one of the main challenges in the field of generative adversarial networks (GANs). To this end, this study proposes a GAN penalty algorithm that leverages a constructed conditional entropy distance to penalize the generator. While keeping the entropy invariant, the algorithm pushes the generated distribution as close to the target distribution as possible, greatly improving the quality of the generated samples. In addition, to improve the training efficiency of GANs, the network structure is optimized and the initialization strategy of the two networks is changed. Experimental results on several datasets show that the penalty algorithm significantly improves the quality of the generated samples. In particular, on the CIFAR10, STL10, and CelebA datasets, the best FID values are reduced from 16.19, 14.10, and 4.65 to 14.02, 12.83, and 3.22, respectively. © Copyright 2021, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
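The record does not define the conditional entropy distance itself, so the following is only an illustrative sketch of the general idea the abstract describes: a generator loss that combines a distribution-matching term with a penalty that discourages entropy drift between the generated and target distributions. All function names are hypothetical, the distance terms (KL divergence plus an absolute entropy gap over discrete histograms) are stand-ins rather than the paper's actual formulation, and `lam` is an assumed penalty weight.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy (in nats) of a discrete distribution.
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + eps))

def kl(p, q, eps=1e-12):
    # KL divergence D(p || q) between two discrete distributions.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sum(p * np.log((p + eps) / (q + eps)))

def penalized_generator_loss(p_gen, p_target, lam=1.0):
    # Base term pulls the generated distribution toward the target;
    # the (hypothetical) penalty term discourages entropy drift,
    # echoing the abstract's "keeping the entropy invariant".
    return kl(p_target, p_gen) + lam * abs(entropy(p_gen) - entropy(p_target))

# Toy example: a skewed generated histogram vs. a uniform target.
p_target = np.array([0.25, 0.25, 0.25, 0.25])
p_gen = np.array([0.4, 0.3, 0.2, 0.1])
loss = penalized_generator_loss(p_gen, p_target)
```

In an actual GAN, such a penalty would be added to the adversarial loss and minimized by the generator; when `p_gen` matches `p_target`, both terms vanish.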
Pages: 1116-1128
Number of pages: 12