Stabilizing and Improving Training of Generative Adversarial Networks Through Identity Blocks and Modified Loss Function

Cited by: 7
Authors
Fathallah, Mohamed [1 ]
Sakr, Mohamed [2 ]
Eletriby, Sherif [2 ]
Affiliations
[1] Kafrelsheikh Univ, Fac Comp & Informat, Dept Comp Sci, Kafrelsheikh 33516, Egypt
[2] Menoufia Univ, Fac Comp & Informat, Dept Comp Sci, Shibin Al Kawm 32511, Menoufia, Egypt
Keywords
Training; Generators; Generative adversarial networks; Smoothing methods; Data models; Standards; Optimization; Generative adversarial network; deep learning; mode collapse; label smoothing; identity block
DOI
10.1109/ACCESS.2023.3272032
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Generative adversarial networks (GANs) are a powerful tool for synthesizing realistic images, but they can be difficult to train and are prone to instability and mode collapse. This paper proposes a new model called Identity Generative Adversarial Network (IGAN) that addresses these issues. The model is based on three modifications to the baseline deep convolutional generative adversarial network (DCGAN). The first change is to add a non-linear identity block to the architecture, which makes it easier for the model to fit complex data types and reduces training time. The second change is to smooth the standard GAN objective by using a modified loss function together with label smoothing. The third and final change is to use minibatch training, letting the model use other examples from the same minibatch as side information to improve the quality and variety of generated images. These changes help to stabilize the training process and improve the model's performance. The performance of the GAN models is compared using the inception score (IS) and the Fréchet inception distance (FID), which are widely used metrics for evaluating the quality and diversity of generated images. The effectiveness of our approach was tested by comparing an IGAN model with other GAN models on the CelebA and stacked MNIST datasets. Results show that IGAN outperforms all the other models, achieving an IS of 13.95 and an FID of 43.71 after training for 200 epochs. In addition to demonstrating the improvement in the performance of the IGAN, the instabilities, diversity, and fidelity of the models were investigated. The results showed that the IGAN was able to converge to the distribution of the real data more quickly. Furthermore, the experiments revealed that IGAN is capable of producing more stable and high-quality images.
This suggests that IGAN is a promising approach for improving the training and performance of GANs and may have a range of applications in image synthesis and other areas.
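Two of the modifications described in the abstract can be sketched concretely. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the one-sided smoothing scheme, and the smoothing value 0.1 are assumptions for the sake of the example.

```python
import numpy as np

def identity_block(x, transform):
    """Residual/identity connection: the block's output is transform(x) + x,
    so the signal (and its gradient) can bypass the non-linear transform."""
    return transform(x) + x

def smoothed_bce(pred, target, smooth=0.1):
    """Binary cross-entropy with one-sided label smoothing: real labels (1.0)
    are softened to 1.0 - smooth, discouraging the discriminator from
    becoming overconfident on real samples."""
    target = np.where(target == 1.0, 1.0 - smooth, target)
    eps = 1e-7  # clip predictions to avoid log(0)
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1.0 - target) * np.log(1.0 - pred))))
```

With `smooth=0.0` the function reduces to plain binary cross-entropy; with smoothing enabled, a confident correct prediction on a real sample still incurs a small penalty, which is the stabilizing effect the paper relies on.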
Pages: 43276-43285
Page count: 10