Stabilizing and Improving Training of Generative Adversarial Networks Through Identity Blocks and Modified Loss Function

Cited by: 7
Authors
Fathallah, Mohamed [1 ]
Sakr, Mohamed [2 ]
Eletriby, Sherif [2 ]
Affiliations
[1] Kafrelsheikh Univ, Fac Comp & Informat, Dept Comp Sci, Kafrelsheikh 33516, Egypt
[2] Menoufia Univ, Fac Comp & Informat, Dept Comp Sci, Shibin Al Kawm 32511, Menoufia, Egypt
Keywords
Training; Generators; Generative adversarial networks; Smoothing methods; Data models; Standards; Optimization; Generative adversarial network; deep learning; mode collapse; label smoothing; identity block;
DOI
10.1109/ACCESS.2023.3272032
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Generative adversarial networks (GANs) are a powerful tool for synthesizing realistic images, but they are difficult to train and prone to instability and mode collapse. This paper proposes a new model, the Identity Generative Adversarial Network (IGAN), that addresses these issues through three modifications to the baseline deep convolutional generative adversarial network (DCGAN). The first is a non-linear identity block added to the architecture, which makes it easier for the model to fit complex data and reduces training time. The second smooths the standard GAN objective by combining a modified loss function with label smoothing. The third uses minibatch training, letting the model draw on other examples from the same minibatch as side information to improve the quality and diversity of generated images. Together, these changes stabilize the training process and improve the model's performance. The GAN models are compared using the inception score (IS) and the Fréchet inception distance (FID), widely used metrics for evaluating the quality and diversity of generated images. The effectiveness of the approach was tested by comparing IGAN with other GAN models on the CelebA and stacked MNIST datasets. Results show that IGAN outperforms all the other models, achieving an IS of 13.95 and an FID of 43.71 after training for 200 epochs. Beyond the overall performance gains, the stability, diversity, and fidelity of the models were investigated: IGAN converges to the real data distribution more quickly and produces more stable, higher-quality images. These results suggest that IGAN is a promising approach for improving the training and performance of GANs, with potential applications in image synthesis and other areas.
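The abstract names the modifications but gives no implementation details. As a rough illustration only, the PyTorch sketch below shows how two of the described ideas are commonly realized: a residual-style non-linear block with a skip connection (here called IdentityBlock) and a discriminator loss with one-sided label smoothing. The block layout, the 0.9 smoothing target, and all names are assumptions for illustration, not the IGAN specification from the paper.

```python
# Minimal sketch, assuming the "identity block" is a residual-style block and
# that label smoothing replaces the hard real label 1.0 with a softer target.
import torch
import torch.nn as nn

class IdentityBlock(nn.Module):
    """Non-linear block whose input is added back to its output (skip path)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity (skip) connection: output = activation(x + F(x))
        return self.act(x + self.body(x))

def discriminator_loss(real_logits, fake_logits, real_target=0.9):
    """Standard GAN discriminator loss with one-sided label smoothing.

    real_target < 1.0 softens the "real" label; 0.9 is an illustrative choice.
    """
    bce = nn.BCEWithLogitsLoss()
    real_labels = torch.full_like(real_logits, real_target)  # smoothed reals
    fake_labels = torch.zeros_like(fake_logits)               # hard fakes
    return bce(real_logits, real_labels) + bce(fake_logits, fake_labels)
```

Minibatch training as side information (the third change) is typically implemented as minibatch discrimination or a batch-statistics feature in the discriminator; it is omitted here to keep the sketch short.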
Pages: 43276-43285
Page count: 10