An Empirical Analysis of Generative Adversarial Network Training Times with Varying Batch Sizes

Cited by: 0
Authors
Ghosh, Bhaskar [1 ]
Dutta, Indira Kalyan [1 ]
Carlson, Albert
Totaro, Michael [1 ]
Bayoumi, Magdy [1 ]
Affiliations
[1] Univ Louisiana Lafayette, Lafayette, LA 70504 USA
Source
2020 11TH IEEE ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2020
Keywords
Generative Adversarial Networks; Training; Hyper-parameter; Neural Networks; Artificial Intelligence
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Increasing the performance of a Generative Adversarial Network (GAN) requires experimentation in choosing suitable training hyper-parameters, namely learning rate and batch size. There is no consensus on learning rates or batch sizes in GANs, which makes obtaining acceptable output a "trial-and-error" process. Researchers also hold differing views on the effect of batch size on run time. This paper investigates the impact of these GAN training parameters on actual elapsed training time. In our initial experiments, we study the effects of batch size, learning rate, loss function, and optimization algorithm on training using the MNIST dataset over 30,000 epochs. The simplicity of the MNIST dataset makes it a suitable starting point for determining whether these parameter changes have any significant impact on training times. The goal is to analyze and understand the results of varying loss functions, batch sizes, optimizer algorithms, and learning rates on GANs and to address the key issue of batch size and learning rate selection.
Pages: 643 - 648
Page count: 6
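The abstract describes timing GAN training on MNIST while varying batch size, learning rate, loss function, and optimizer. Below is a minimal sketch (not the authors' code) of such a timing harness, assuming PyTorch and torchvision are available; the network sizes, epoch count, loss choice, and helper names (build_generator, build_discriminator, time_gan_training) are illustrative assumptions rather than values taken from the paper.

# Minimal sketch of a GAN training-time measurement harness (assumed setup,
# not the authors' implementation): train a small GAN on MNIST for a few
# epochs per (batch size, learning rate) configuration and record the elapsed
# wall-clock time.
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

LATENT_DIM = 100  # illustrative noise dimension

def build_generator():
    # Simple fully connected generator mapping noise to 28x28 images in [-1, 1].
    return nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 28 * 28), nn.Tanh(),
    )

def build_discriminator():
    # Simple fully connected discriminator producing a real/fake logit.
    return nn.Sequential(
        nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

def time_gan_training(batch_size, lr, epochs=1, device="cpu"):
    """Train for a few epochs and return elapsed seconds for this configuration."""
    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.Compose([
                              transforms.ToTensor(),
                              transforms.Normalize((0.5,), (0.5,)),
                          ]))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)

    G, D = build_generator().to(device), build_discriminator().to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()  # one of several possible GAN loss choices

    start = time.perf_counter()
    for _ in range(epochs):
        for real, _ in loader:
            real = real.view(real.size(0), -1).to(device)
            noise = torch.randn(real.size(0), LATENT_DIM, device=device)
            fake = G(noise)

            # Discriminator step: real images labelled 1, generated images 0.
            opt_d.zero_grad()
            d_loss = (bce(D(real), torch.ones(real.size(0), 1, device=device)) +
                      bce(D(fake.detach()), torch.zeros(real.size(0), 1, device=device)))
            d_loss.backward()
            opt_d.step()

            # Generator step: push the discriminator to output 1 for fakes.
            opt_g.zero_grad()
            g_loss = bce(D(fake), torch.ones(real.size(0), 1, device=device))
            g_loss.backward()
            opt_g.step()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Sweep a few illustrative batch sizes and learning rates and report timings.
    for batch_size in (32, 128, 512):
        for lr in (1e-4, 2e-4):
            elapsed = time_gan_training(batch_size, lr)
            print(f"batch_size={batch_size:4d} lr={lr:.0e} elapsed={elapsed:.1f}s")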