Loss Functions of Generative Adversarial Networks (GANs): Opportunities and Challenges

Cited by: 52
Authors
Pan, Zhaoqing [1 ,2 ]
Yu, Weijie [1 ]
Wang, Bosi [1 ]
Xie, Haoran [3 ]
Sheng, Victor S. [4 ]
Lei, Jianjun [5 ]
Kwong, Sam [6 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing 210044, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Lingnan Univ, Dept Comp & Decis Sci, Hong Kong, Peoples R China
[4] Texas Tech Univ, Dept Comp Sci, Lubbock, TX 79409 USA
[5] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[6] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2020, Vol. 4, No. 4
Keywords
Loss functions; generative adversarial networks (GANs); deep learning; machine learning; computational intelligence; image; distance
DOI
10.1109/TETCI.2020.2991774
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Generative Adversarial Networks (GANs) have fast become a key promising research direction in computational intelligence. To improve the modeling ability of GANs, loss functions are used to measure the differences between the samples generated by the model and the real samples, and to guide the model toward its learning goal. In this paper, we survey the loss functions used in GANs and analyze their pros and cons. First, the basic theory of GANs and their training mechanism are introduced. Second, the loss functions used in GANs are summarized, covering not only the objective functions of GANs but also application-oriented GAN loss functions. Third, experiments on and analyses of representative loss functions are discussed. Finally, several suggestions on how to choose an appropriate loss function for a specific task are given.
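As background for the loss functions the abstract refers to, the original GAN objective of Goodfellow et al. — minimize L_D = -E[log D(x)] - E[log(1 - D(G(z)))] for the discriminator, with the non-saturating generator loss L_G = -E[log D(G(z))] — can be sketched in NumPy. This is a minimal illustrative sketch, not code from the paper; the function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    """Map raw discriminator logits to probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(real_logits, fake_logits):
    # Original GAN discriminator objective, written as a loss to minimize:
    # L_D = -E[log D(x)] - E[log(1 - D(G(z)))]
    return -np.mean(np.log(sigmoid(real_logits))
                    + np.log(1.0 - sigmoid(fake_logits)))

def generator_loss(fake_logits):
    # Non-saturating generator loss, L_G = -E[log D(G(z))],
    # commonly used in practice instead of minimizing E[log(1 - D(G(z)))]
    # because it gives stronger gradients early in training.
    return -np.mean(np.log(sigmoid(fake_logits)))
```

For logits of 0 (i.e., D outputs 0.5 everywhere), the discriminator loss equals 2·ln 2 and the generator loss ln 2, the well-known equilibrium values of the original GAN game.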
Pages: 500-522
Number of pages: 23
Related Papers (95 in total)
[61] Qi, Guo-Jun. Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities. International Journal of Computer Vision, 2020, 128(5): 1118-1140.
[62] Rahnama A., 2019, arXiv:1911.04636.
[63] Rudin, L. I.; Osher, S.; Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. Physica D, 1992, 60(1-4): 259-268.
[64] Sajjadi, Mehdi S. M.; Schoelkopf, Bernhard; Hirsch, Michael. EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 4501-4510.
[65] Salimans T., 2016, Advances in Neural Information Processing Systems, Vol. 29.
[66] Shaham, Tamar Rott; Dekel, Tali; Michaeli, Tomer. SinGAN: Learning a Generative Model from a Single Natural Image. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019: 4569-4579.
[67] Shin, Sang-Yun; Kang, Yong-Won; Kim, Yong-Guk. Android-GAN: Defending against Android pattern attacks using multi-modal generative network as anomaly detector. Expert Systems with Applications, 2020, 141.
[68] Song J., 2020, arXiv:2002.09847.
[69] Sriperumbudur B. K., 2009, arXiv preprint.
[70] Sun, 2017, arXiv, p. 3155.