A Cooperative Framework with Generative Adversarial Networks and Entropic Auto-Encoders for Text Generation

Cited: 0
Authors
Liu, Zhiyue [1 ]
Wang, Jiahai [1 ]
Institutions
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
Source
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2021
Funding
National Key R&D Program of China; National Natural Science Foundation of China
DOI
10.1109/IJCNN52387.2021.9533782
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Generating text with high quality and sufficient diversity is a fundamental task in natural language generation. Although generative adversarial networks (GANs) achieve promising results in text generation, GAN-based language models suffer from mode collapse: the generator tends to sacrifice diversity and focus on a limited set of high-quality text patterns. By contrast, maximum likelihood estimation (MLE) based language models can cover various text patterns but generate diverse samples of poor quality. This paper proposes a cooperative framework with GANs and entropic auto-encoders (EAEs), named GAN-EAE, to combine their advantages for text generation, where EAEs are powerful MLE-based generative models built on deterministic auto-encoders. By imitating the output distribution of the EAE, the generator shapes its output distribution closer to the real data distribution, counteracting mode collapse. Meanwhile, by learning from samples produced by the GAN generator, the EAE concentrates probability mass on high-quality patterns, improving generation quality. Similar samples produced by the generator may aggravate mode collapse and should be downplayed during adversarial training; thus, a sample re-weighting mechanism that measures the inner distance of generated samples is adopted to improve diversity. Experimental results demonstrate that GAN-EAE improves both GANs and EAEs and achieves state-of-the-art performance.
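The re-weighting idea described above — measuring the "inner distance" of generated samples and down-weighting near-duplicates — can be sketched as follows. This is a minimal illustrative sketch, not the paper's published equations: it assumes each generated sentence is already represented as a fixed-length embedding vector, and it uses the mean pairwise Euclidean distance, normalized to sum to the batch size, as the weight. The function name `reweight_samples` and the exact normalization are assumptions made for illustration.

```python
import math

def reweight_samples(embeddings):
    """Assign each generated sample a weight proportional to its mean
    distance from the other samples in the batch.

    embeddings: list of equal-length numeric vectors, one per sample.
    Returns a list of weights summing to len(embeddings); samples that
    sit close to many others (potential mode collapse) fall below 1.
    """
    n = len(embeddings)

    def dist(a, b):
        # Euclidean distance between two embedding vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # "Inner distance": mean distance from each sample to all others.
    inner = [
        sum(dist(embeddings[i], embeddings[j]) for j in range(n) if j != i)
        / (n - 1)
        for i in range(n)
    ]

    # Normalize so the weights sum to n.
    total = sum(inner)
    return [n * v / total for v in inner]
```

In a batch of three samples where two embeddings coincide, the duplicated pair receives a weight of 0.75 each while the distinct sample receives 1.5, so the adversarial loss would emphasize the more novel sample.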
Pages: 8