Exploring generative adversarial networks and adversarial training

Cited by: 0
Authors
Sajeeda A. [1 ]
Hossain B.M.M. [1 ]
Affiliations
[1] Institute of Information Technology, University of Dhaka, Dhaka
Source
Int. J. Cogn. Comp. Eng., pp. 78-89
Keywords
Adversarial training; Deep learning; GANs; Generative adversarial networks; Generative modeling;
DOI
10.1016/j.ijcce.2022.03.002
Abstract
Recognized as a realistic image generator, the Generative Adversarial Network (GAN) occupies a prominent place in deep learning. In generative modeling, the generator learns the real target distribution and outputs fake samples from its learned replica of that distribution. The discriminator attempts to distinguish fake samples from real ones and sends feedback to the generator so that the generator can improve its fakes. Recently, GANs have been competitive with the state of the art in various tasks, including image processing, missing-data imputation, text-to-image translation, and adversarial example generation. However, the architecture suffers from training instability, leading to problems such as non-convergence, mode collapse, and vanishing gradients. The research community has been studying and devising modified architectures, alternative loss functions, and training techniques to address these concerns. A body of publications has also studied adversarial training alongside GANs. This review covers the existing work on the instability of GANs from its beginnings, together with a selection of recent publications that illustrate the trend of research. It also gives insight into studies exploring adversarial attacks and into research that combines adversarial attacks with GANs. In short, this study aims to guide researchers interested in the modifications made to GANs for stable training in the presence of adversarial attacks. © 2022
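The generator/discriminator feedback loop described in the abstract can be sketched with a toy one-dimensional GAN. Everything below — the Gaussian target distribution, the affine generator, the logistic-regression discriminator, and the learning rate — is an illustrative assumption for this sketch, not a detail from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    """D(x) = sigmoid(w*x + b): estimated probability that x is real."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, a, c):
    """G(z) = a*z + c: maps standard-normal noise to fake samples."""
    return a * z + c

# Hypothetical parameters: logistic discriminator (w, b), affine generator (a, c).
w, b = 0.1, 0.0
a, c = 1.0, 0.0
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.25, size=64)   # samples from the real distribution
    z = rng.normal(size=64)                 # generator noise
    fake = generator(z, a, c)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = discriminator(real, w, b)
    d_fake = discriminator(fake, w, b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascent on the non-saturating objective log D(G(z)),
    # using the discriminator's output as the feedback signal.
    d_fake = discriminator(generator(z, a, c), w, b)
    a += lr * np.mean((1 - d_fake) * w * z)   # chain rule: dG/da = z
    c += lr * np.mean((1 - d_fake) * w)       # chain rule: dG/dc = 1

# The learned offset c should drift toward the real mean (around 4),
# typically with some oscillation — a mild taste of GAN training instability.
print(round(c, 2))
```

The generator here maximizes log D(G(z)) rather than minimizing log(1 − D(G(z))); this non-saturating variant is the standard remedy for the vanishing-gradient problem the abstract mentions, since the original loss gives the generator almost no gradient while the discriminator easily rejects its samples.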
Pages: 78-89
Page count: 11