Conditional BERT Contextual Augmentation

Cited by: 193
Authors
Wu, Xing [1 ,2 ]
Lv, Shangwen [1 ,2 ]
Zang, Liangjun [1 ]
Han, Jizhong [1 ,2 ]
Hu, Songlin [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
COMPUTATIONAL SCIENCE - ICCS 2019, PT IV | 2019 / Vol. 11539
Funding
National Natural Science Foundation of China
DOI
10.1007/978-3-030-22747-0_7
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Data augmentation methods are often applied to prevent overfitting and improve the generalization of deep neural network models. The recently proposed contextual augmentation technique augments labeled sentences by randomly replacing words with more varied substitutions predicted by a language model. Bidirectional Encoder Representations from Transformers (BERT) demonstrates that a deep bidirectional language model is more powerful than either a unidirectional language model or the shallow concatenation of a forward and a backward model. We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. We retrofit BERT to conditional BERT by introducing a new conditional masked language model task. (The term "conditional masked language model" appears once in the original BERT paper, where it denotes context-conditioning and is equivalent to "masked language model"; in our paper, it indicates that we impose an extra label-conditional constraint on the masked language model.) The well-trained conditional BERT can then be applied to enhance contextual augmentation. Experiments on six different text classification tasks show that our method can easily be applied to both convolutional and recurrent neural network classifiers to obtain improvements.
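The augmentation procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_fn` is a hypothetical stand-in for conditional BERT's label-conditional masked-LM head, and the toy substitution table merely mimics the label-compatible replacements that conditional BERT learns to produce.

```python
import random

def augment(tokens, label, predict_fn, mask_prob=0.15, rng=None):
    """Contextual augmentation sketch: randomly replace tokens with
    substitutes proposed by a label-conditional predictor.

    predict_fn(tokens, position, label) stands in for conditional
    BERT's masked-LM head: it should propose a replacement compatible
    with both the surrounding context and the sentence label.
    """
    rng = rng or random.Random()
    out = list(tokens)
    for i in range(len(out)):
        if rng.random() < mask_prob:
            out[i] = predict_fn(out, i, label)
    return out

# Toy label-conditional predictor (hypothetical): substitutions agree
# with the sentiment label, so the augmented sentence keeps its label.
SUBS = {
    ("good", "positive"): "great",
    ("good", "negative"): "mediocre",
}

def toy_predict(tokens, i, label):
    return SUBS.get((tokens[i], label), tokens[i])

sentence = ["the", "movie", "was", "good"]
# mask_prob=1.0 forces every position through the predictor.
print(augment(sentence, "positive", toy_predict, mask_prob=1.0))
# → ['the', 'movie', 'was', 'great']
```

Note how conditioning on the label matters: an unconditional predictor could replace "good" with "bad", silently flipping a positive example's sentiment, whereas a label-conditional predictor keeps the augmented sentence consistent with its label.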
Pages: 84-95 (12 pages)
Related references (34 total)
[1] [Anonymous], 2018, arXiv:1804.09000
[2] [Anonymous], 2017, arXiv:1705.00440
[3] [Anonymous], arXiv:1703.00955
[4] [Anonymous], 2015, Proc. IEEE Conf. Comput. Vis. Pattern Recognit., DOI 10.1109/CVPR.2015.7298594
[5] Bowman S. R., 2015, arXiv
[6] Dai A. M., 2015, Semi-supervised Sequence Learning, p. 3079
[7] Deschacht K., 2009, Semi-supervised Seman., p. 21
[8] Devlin J., 2019, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Vol. 1, p. 4171
[9] Ebrahimi J., 2017, HotFlip: White-Box Adversarial Examples for Text Classification
[10] Goodfellow I. J., 2014, Adv. Neural Inf. Process. Syst., Vol. 27, p. 2672