Unsupervised statistical text simplification using pre-trained language modeling for initialization

Cited: 0
Authors
QIANG Jipeng [1 ]
ZHANG Feng [1 ]
LI Yun [1 ]
YUAN Yunhao [1 ]
ZHU Yi [1 ]
WU Xindong [2 ,3 ]
Affiliations
[1] Department of Computer Science, Yangzhou University, Yangzhou, China
[2] Key Laboratory of Knowledge Engineering with Big Data (Hefei University of Technology), Ministry of Education, Hefei, China
[3] Mininglamp Academy of Sciences, Mininglamp, Beijing, China
Keywords
text simplification; pre-trained language modeling; BERT; word embeddings
DOI
Not available
CLC Number
TP391.1 [Text Information Processing]
Subject Classification Number
Abstract
Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification method based on a phrase-based machine translation system (UnsupPBMT) achieved good performance; it initializes the phrase tables using similar words obtained from word embedding models. Since word embedding models only capture relatedness between words, the phrase tables in UnsupPBMT contain many dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base to predict similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks and even outperforms some supervised baselines.
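To illustrate the core idea the abstract describes (a masked language model proposing context-aware similar words for a target word), the following is a minimal sketch. It assumes the Hugging Face transformers library with the bert-base-uncased checkpoint, and the similar_words helper is hypothetical for illustration; it is not the authors' UnsupPBMT phrase-table initialization code.

```python
# Minimal sketch: use BERT's masked-LM head to propose context-aware
# substitutes for a target word (assumptions: PyTorch + transformers
# installed, bert-base-uncased checkpoint; not the paper's actual code).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def similar_words(sentence, target, top_k=10):
    """Mask `target` in `sentence` and return BERT's top-k fill-in candidates."""
    masked = sentence.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    # Locate the [MASK] position in the tokenized input.
    mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    # Rank vocabulary items by the masked-LM score at the mask position.
    top_ids = logits[0, mask_index[0]].topk(top_k).indices.tolist()
    return tokenizer.convert_ids_to_tokens(top_ids)

# Candidate replacements for a complex word, ranked by BERT:
print(similar_words("The committee will scrutinize the proposal carefully.", "scrutinize"))
```

Because the predictions are conditioned on the surrounding sentence, candidates of this kind tend to be semantically substitutable rather than merely related, which is the contrast the abstract draws with word-embedding neighbors.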