Composite Functional Gradient Learning of Generative Adversarial Models

Cited: 0
Authors
Johnson, Rie [1 ]
Zhang, Tong [2 ]
Affiliations
[1] RJ Res Consulting, Tarrytown, NY 10591 USA
[2] Tencent AI Lab, Shenzhen, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80 | 2018 / Vol. 80
Keywords
(none listed)
DOI
(none available)
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation. It shows that with a strong discriminator, a good generator can be learned so that the KL divergence between the distributions of real data and generated data improves after each functional gradient step until it converges to zero. Based on the theory, we propose a new stable generative adversarial method. A theoretical insight into the original GAN from this new viewpoint is also provided. The experiments on image generation show the effectiveness of our new method.
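The abstract's central claim, that each functional gradient step composes a small update onto the generator and monotonically decreases the KL divergence, can be sketched schematically. The notation below ($G_t$, $g_t$, $\eta_t$, $p_t$) is illustrative and assumed, not taken from this record:

```latex
% Schematic sketch (assumed notation, not from the record itself).
% Starting from an initial map $G_0$, each step composes a residual
% update derived from the current discriminator:
\[
  G_t(z) \;=\; G_{t-1}(z) \;+\; \eta_t\, g_t\!\bigl(G_{t-1}(z)\bigr),
\]
% where $g_t$ is a functional-gradient direction chosen so that the KL
% divergence between the real-data distribution $p_*$ and the generated
% distribution $p_t$ does not increase at any step:
\[
  \mathrm{KL}(p_* \,\|\, p_t) \;\le\; \mathrm{KL}(p_* \,\|\, p_{t-1}),
\]
% with the abstract asserting convergence of this divergence to zero.
```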
Pages: 9