Rich Punctuations Prediction Using Large-scale Deep Learning

Cited by: 0
Authors
Wu, Xueyang [1 ]
Zhu, Su [1 ]
Wu, Yue [1 ]
Yu, Kai [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Key Lab Shanghai Educ Commiss Intelligent Interac, Brain Sci & Technol Res Ctr, SpeechLab,Dept Comp Sci & Engn, Shanghai, Peoples R China
Source
2016 10TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP) | 2016
Keywords
deep learning; neural networks; punctuation prediction; large-scale;
DOI
Not available
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Punctuation plays an important role in language processing, yet automatic speech recognition systems output only plain word sequences. It is therefore of interest to predict punctuation on such sequences. Previous work has focused on using lexical features or prosodic cues captured from small corpora to predict simple punctuation marks. Compared with simple punctuation, rich punctuation conveys more meaningful information and is more difficult to predict. In this paper, a multi-view LSTM model is proposed to predict rich punctuation on large-scale corpora. In particular, prediction on both in-domain and out-of-domain datasets is investigated. Experiments show that the LSTM significantly outperforms a traditional CRF-based model. Moreover, large-scale corpora are shown to bring substantial improvements, and introducing POS tags and chunking information in a multi-view structure improves the performance of the LSTM model on small corpora.
Pages: 5