Stabilizing and Improving Training of Generative Adversarial Networks Through Identity Blocks and Modified Loss Function

Cited by: 7
Authors
Fathallah, Mohamed [1 ]
Sakr, Mohamed [2 ]
Eletriby, Sherif [2 ]
Affiliations
[1] Kafrelsheikh Univ, Fac Comp & Informat, Dept Comp Sci, Kafrelsheikh 33516, Egypt
[2] Menoufia Univ, Fac Comp & Informat, Dept Comp Sci, Shibin Al Kawm 32511, Menoufia, Egypt
Keywords
Training; Generators; Generative adversarial networks; Smoothing methods; Data models; Standards; Optimization; Generative adversarial network; deep learning; mode collapse; label smoothing; identity block
DOI
10.1109/ACCESS.2023.3272032
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Generative adversarial networks (GANs) are a powerful tool for synthesizing realistic images, but they are difficult to train and prone to instability and mode collapse. This paper proposes a new model, the Identity Generative Adversarial Network (IGAN), that addresses these issues through three modifications to the baseline deep convolutional generative adversarial network (DCGAN). The first is the addition of a non-linear identity block to the architecture, which makes it easier for the model to fit complex data distributions and reduces training time. The second is smoothing the standard GAN loss function through a modified loss function and label smoothing. The third is the use of minibatch training, which lets the model use other examples from the same minibatch as side information to improve the quality and variety of the generated images. Together, these changes stabilize the training process and improve the model's performance. The GAN models are compared using the inception score (IS) and the Fréchet inception distance (FID), two widely used metrics for evaluating the quality and diversity of generated images. The effectiveness of the approach was tested by comparing IGAN with other GAN models on the CelebA and stacked MNIST datasets. IGAN outperforms all the other models, achieving an IS of 13.95 and an FID of 43.71 after training for 200 epochs. Beyond the raw performance gains, the stability, diversity, and fidelity of the models were investigated: IGAN converged to the distribution of the real data more quickly and produced more stable, higher-quality images. This suggests that IGAN is a promising approach for improving the training and performance of GANs, with potential applications in image synthesis and other areas.
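The abstract names the non-linear identity block but does not give its layer composition. As a minimal PyTorch sketch, assuming a standard residual-style design (a non-linear branch whose output is added back onto its input, so the block can fall back to an identity mapping), it might look as follows; the channel count, kernel sizes, and normalization choices are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class IdentityBlock(nn.Module):
    # Residual-style identity block: a non-linear branch is added back
    # onto the input, so the block can default to an identity mapping
    # and the skip connection keeps gradients flowing during training.
    # Layer composition here is an assumption, not the paper's design.
    def __init__(self, channels: int):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.branch(x))

Because the skip path passes the input through unchanged, the block behaves close to an identity early in training, which is consistent with the easier fitting and reduced training time the abstract reports.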
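The modified loss function is likewise only summarized. A common way to realize GAN label smoothing is one-sided smoothing of the real label in the binary cross-entropy objective, replacing the target 1.0 with, for example, 0.9 (Salimans et al., 2016); the sketch below assumes that variant and is not necessarily the paper's exact formulation.

import torch
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits, real_label=0.9):
    # One-sided label smoothing: real targets are 0.9 instead of 1.0,
    # fake targets stay 0.0. This keeps the discriminator from becoming
    # overconfident and destabilizing the generator's gradients.
    real_targets = torch.full_like(real_logits, real_label)
    fake_targets = torch.zeros_like(fake_logits)
    loss_real = F.binary_cross_entropy_with_logits(real_logits, real_targets)
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, fake_targets)
    return loss_real + loss_fake

The smoothing is applied only to the real targets; smoothing the fake targets as well is generally avoided because it rewards the generator for producing samples the discriminator is unsure about.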
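Using other examples from the same minibatch as side information is usually implemented either as minibatch discrimination (Salimans et al., 2016) or as the simpler minibatch standard-deviation feature; the sketch below shows the latter as an assumed realization, not a transcription of IGAN's exact mechanism.

import torch
import torch.nn as nn

class MinibatchStdDev(nn.Module):
    # Appends the average per-feature standard deviation across the
    # minibatch as an extra constant channel, so the discriminator can
    # detect low-diversity (mode-collapsed) batches of generated images.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (N, C, H, W); std is taken over the batch axis.
        std = x.std(dim=0, unbiased=False).mean()
        std_map = std.expand(x.size(0), 1, x.size(2), x.size(3))
        return torch.cat([x, std_map], dim=1)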
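For the IS and FID comparisons, a typical evaluation loop uses the torchmetrics implementations of both metrics; this assumes torchmetrics (with its image extras) is installed, and the paper does not state which implementation it used. Random uint8 tensors stand in here for batches of real and generated images.

import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

# Both metrics expect uint8 image batches of shape (N, 3, H, W) by default.
real = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())  # lower is better

inception = InceptionScore()
inception.update(fake)
is_mean, is_std = inception.compute()
print("IS:", is_mean.item(), "+/-", is_std.item())  # higher is better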
Pages: 43276 - 43285
Page count: 10
Related Papers
50 records in total
  • [1] Collaborative-GAN: An Approach for Stabilizing the Training Process of Generative Adversarial Network
    Megahed, Mohammed
    Mohammed, Ammar
    IEEE ACCESS, 2024, 12 : 138716 - 138735
  • [2] Up and Down Residual Blocks for Convolutional Generative Adversarial Networks
    Wang, Yueyue
    Guo, Xinchang
    Liu, Peng
    Wei, Bin
    IEEE ACCESS, 2021, 9 : 26051 - 26058
  • [3] Interpretable Generative Adversarial Networks With Exponential Function
    She, Rui
    Fan, Pingyi
    Liu, Xiao-Yang
    Wang, Xiaodong
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 3854 - 3867
  • [4] A Unifying Generator Loss Function for Generative Adversarial Networks
    Veiner, Justin
    Alajaji, Fady
    Gharesifard, Bahman
    ENTROPY, 2024, 26 (04)
  • [5] Federated Training Generative Adversarial Networks for Heterogeneous Vehicle Scheduling in IoV
    Wu, Lizhao
    Lin, Hui
    Wang, Xiaoding
IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (05) : 4888 - 4898
  • [6] Exploring generative adversarial networks and adversarial training
Sajeeda, A.
Hossain, B. M. M.
INT. J. COGN. COMP. ENG. : 78 - 89
  • [7] DFS-GAN: stabilizing training of generative adversarial networks through discarding fake samples
    Yang, Lianping
    Sun, Hao
    Zhang, Jian
    Mo, Sijia
    Jiang, Wuming
    Zhang, Xiangde
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (06)
  • [8] Stabilizing Training of Generative Adversarial Nets via Langevin Stein Variational Gradient Descent
    Wang, Dong
    Qin, Xiaoqian
    Song, Fengyi
    Cheng, Li
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (07) : 2768 - 2780
  • [9] The effect of loss function on conditional generative adversarial networks
    Abu-Srhan, Alaa
    Abushariah, Mohammad A. M.
    Al-Kadi, Omar S.
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2022, 34 (09) : 6977 - 6988
  • [10] Multivariate Generative Adversarial Networks and Their Loss Functions for Synthesis of Multichannel ECGs
    Brophy, Eoin
    De Vos, Maarten
    Boylan, Geraldine
    Ward, Tomas
    IEEE ACCESS, 2021, 9 : 158936 - 158945