DFS-GAN: stabilizing training of generative adversarial networks through discarding fake samples

Cited by: 1
Authors
Yang, Lianping [1 ]
Sun, Hao [1 ]
Zhang, Jian [1 ]
Mo, Sijia [1 ]
Jiang, Wuming [2 ]
Zhang, Xiangde [1 ]
Affiliations
[1] Northeastern Univ, Coll Sci, Shenyang, Peoples R China
[2] Beijing EyeCool Technol Co Ltd, Beijing, Peoples R China
Keywords
generative adversarial network; stabilized training; generated samples; IMAGE SYNTHESIS;
DOI
10.1117/1.JEI.31.6.063016
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
Generative adversarial networks (GANs) are generative models based on game theory. Because the balance between the generator and the discriminator must be carefully maintained during training, stable training is difficult to achieve. Although several solutions have been proposed to alleviate this issue, how to improve the stability of GANs still deserves discussion. We propose a GAN that we call the discarding-fake-samples GAN (DFS-GAN). During training, some generated samples cannot fool the discriminator and provide a relatively uninformative gradient to it. In the stabilized discriminator module (SDM), we therefore discard the fake samples that are easily discriminated. We also propose a new loss function, SGAN-gradient penalty 1, and explain the rationale of the SDM and this loss function from a Bayesian decision perspective. We derive the optimal number of fake samples to discard and verify the effectiveness of the selected parameters experimentally. The Frechet inception distance (FID) of DFS-GAN is 14.57 +/- 0.19 on the Canadian Institute for Advanced Research-10 (CIFAR-10) dataset, 20.87 +/- 0.33 on CIFAR-100, and 92.42 +/- 0.43 on ImageNet, lower than that of the current optimal method. Moreover, the SDM can be used in many GANs to decrease the FID, provided their loss functions are compatible. (c) 2022 SPIE and IS&T
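The core SDM idea in the abstract, dropping the fake samples that the discriminator already rejects most confidently before computing its loss, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_hard_fakes` and the convention that a lower discriminator score means "more obviously fake" are assumptions, and the paper's exact discarding criterion and derived discard count may differ.

```python
import numpy as np

def select_hard_fakes(d_scores, n_discard):
    """Return indices of fake samples to keep for the discriminator update.

    d_scores : discriminator scores for a batch of generated samples,
               where a LOWER score means the sample is more easily
               identified as fake (assumed convention).
    n_discard: number of easiest (lowest-scoring) fakes to discard,
               since they contribute little useful gradient.
    """
    order = np.argsort(d_scores)      # ascending: easiest fakes first
    keep = order[n_discard:]          # drop the n_discard easiest fakes
    return np.sort(keep)              # restore original batch order

# Toy batch of 5 fake-sample scores; discard the 2 easiest fakes.
scores = np.array([0.05, 0.9, 0.4, 0.02, 0.7])
kept = select_hard_fakes(scores, 2)   # indices 3 and 0 are discarded
```

In a training loop, the discriminator loss would then be computed only on `fake_batch[kept]` (plus the real samples), while the generator update can still use the full batch.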
Pages: 21
Related papers (33 total)
[11] Isola P., Zhu J.-Y., Zhou T., Efros A. A., "Image-to-Image Translation with Conditional Adversarial Networks," Proc. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017, pp. 5967-5976.
[12] Jiwoong Im D., 2016, arXiv.
[13] Jolicoeur-Martineau A., 2019, Proc. International Conference on Learning Representations.
[14] Karras T., 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, p. 4217, DOI: 10.1109/TPAMI.2020.2970919.
[15] Karras T., 2017, arXiv.
[16] Krizhevsky A., Sutskever I., Hinton G. E., "ImageNet Classification with Deep Convolutional Neural Networks," Communications of the ACM, 2017, 60(6): 84-90.
[17] Li C. X., 2017, Advances in Neural Information Processing Systems, vol. 30.
[18] Liu M., 2016, Proc. International Conference on Neural Information Processing, p. 469.
[19] Mao X., Li Q., Xie H., Lau R. Y. K., Wang Z., Smolley S. P., "Least Squares Generative Adversarial Networks," Proc. 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2813-2821.
[20] Mescheder L., 2018, Proceedings of Machine Learning Research, vol. 80.