Neural Data-to-Text Generation with LM-based Text Augmentation

Cited by: 0
Authors
Chang, Ernie [1 ]
Shen, Xiaoyu [2 ]
Zhu, Dawei [1 ]
Demberg, Vera [1 ]
Su, Hui [3 ]
Affiliations
[1] Saarland Univ, Dept Language Sci & Technol, Saarbrucken, Germany
[2] Amazon Alexa AI, Berlin, Germany
[3] Tencent Inc, Pattern Recognit Ctr, Wechat AI, Shenzhen, Peoples R China
Source
16TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EACL 2021) | 2021
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For many new application domains for data-to-text generation, the main obstacle in training neural models is a lack of training data. While large numbers of instances are usually available on the data side, often only very few text samples are available. To address this problem, we propose a novel few-shot approach for this setting. Our approach automatically augments the data available for training by (i) generating new text samples in which specific values are replaced by alternative ones from the same category, (ii) generating new text samples based on GPT-2, and (iii) proposing an automatic method for pairing the new text samples with data samples. As the text augmentation can introduce noise into the training data, we use cycle consistency as an objective, in order to ensure that a given data sample can be correctly reconstructed after having been formulated as text (and that text samples can be reconstructed from data). On both the E2E and WebNLG benchmarks, we show that this weakly supervised training paradigm is able to outperform fully supervised seq2seq models with less than 10% of the annotations. By utilizing all annotated data, our model can boost the performance of a standard seq2seq model by over 5 BLEU points, establishing a new state of the art on both datasets.
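To make augmentation step (i) concrete, here is a minimal Python sketch of value-replacement augmentation: a slot value in a data record is swapped for another value of the same category, and the same substitution is applied to the paired text. The record format, the VALUE_POOL inventory, and the function name are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch of step (i): create a new (data, text) pair by
# replacing one slot value with an alternative from the same category,
# consistently on both the data side and the text side.
# NOTE: record format and value inventory are assumptions for this sketch.

import random

# Hypothetical inventory of alternative values per slot category
# (E2E-style slots are used here purely as an example).
VALUE_POOL = {
    "name": ["The Eagle", "Blue Spice", "The Mill"],
    "food": ["Italian", "Japanese", "French"],
    "area": ["city centre", "riverside"],
}

def augment_by_value_replacement(record, text, rng=random):
    """Swap one slot value for another of the same category.

    record: dict mapping slot names to values (a meaning representation).
    text:   a reference sentence that verbalizes the record.
    Returns a new (record, text) pair, or None if no safe swap exists.
    """
    # Only swap slots whose current value literally appears in the text,
    # so the substitution stays consistent on both sides of the pair.
    candidates = [
        (slot, value) for slot, value in record.items()
        if slot in VALUE_POOL and value in text
    ]
    if not candidates:
        return None
    slot, old_value = rng.choice(candidates)
    alternatives = [v for v in VALUE_POOL[slot] if v != old_value]
    if not alternatives:
        return None
    new_value = rng.choice(alternatives)
    new_record = dict(record, **{slot: new_value})
    new_text = text.replace(old_value, new_value)
    return new_record, new_text

# Example on an E2E-style meaning representation.
mr = {"name": "The Eagle", "food": "Italian", "area": "riverside"}
ref = "The Eagle is an Italian restaurant on the riverside."
print(augment_by_value_replacement(mr, ref))
```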
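For step (ii), the abstract only states that new text samples are generated with GPT-2. The sketch below shows one plausible setup using the Hugging Face transformers API, sampling continuations from an off-the-shelf gpt2 checkpoint; the paper's actual fine-tuning and prompting choices are not specified in the abstract and may differ.

```python
# Illustrative sketch of step (ii): sampling new in-domain text with GPT-2
# via Hugging Face transformers. Prompt, checkpoint, and decoding
# parameters are assumptions for this sketch.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Seed the model with the start of an in-domain sentence and sample
# several continuations as candidate (unpaired) text samples.
prompt = "The Eagle is an Italian restaurant"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

Texts produced this way are unpaired, which is why the approach also needs step (iii), an automatic method for pairing the new text samples with data samples.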
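The cycle-consistency objective can be written compactly. With a data-to-text model f_θ, a text-to-data model g_φ, data samples d ∈ D, and text samples t ∈ T, one standard formulation (the notation here is ours, not necessarily the paper's) is:

```latex
\mathcal{L}_{\mathrm{cyc}}(\theta, \phi) =
    \mathbb{E}_{d \sim \mathcal{D}}\!\left[-\log p_{\phi}\!\left(d \mid f_{\theta}(d)\right)\right]
  + \mathbb{E}_{t \sim \mathcal{T}}\!\left[-\log p_{\theta}\!\left(t \mid g_{\phi}(t)\right)\right]
```

Minimizing the first term demands that a data sample be recoverable from its generated text, and the second that a text sample be recoverable from its induced data, which is the reconstruction property the abstract describes and what allows training to tolerate noisy augmented pairs.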
Pages: 758-768
Page count: 11