Context-aware RNNLM Rescoring for Conversational Speech Recognition

Cited by: 1
Authors
Wei, Kun [1]
Guo, Pengcheng [1]
Lv, Hang [1]
Tu, Zhen [2]
Xie, Lei [1]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Audio Speech & Language Proc Grp ASLP NPU, Xian, Peoples R China
[2] Zhuiyi Technol, Shenzhen, Peoples R China
Source
2021 12TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP) | 2021
Keywords
conversational speech recognition; recurrent neural network language model; lattice rescoring; language model adaptation
DOI
10.1109/ISCSLP49672.2021.9362109
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Conversational speech recognition is regarded as a challenging task due to its free-style speaking and long-term contextual dependencies. Prior work has explored modeling long-range context through RNNLM rescoring with improved performance. To further exploit the persistent nature of a conversation, such as topics or speaker turns, we extend the rescoring procedure in a context-aware manner. For RNNLM training, we capture contextual dependencies by concatenating adjacent sentences with various tag words, such as speaker or intention information. For lattice rescoring, the lattices of adjacent sentences are likewise connected to the first-pass decoding result by tag words. In addition, we adopt a selective concatenation strategy based on tf-idf, making the best use of contextual similarity to improve transcription performance. Results on four different conversational test sets show that our approach yields up to 13.1% and 6% relative character error rate (CER) reductions compared with first-pass decoding and conventional lattice rescoring, respectively.
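The tf-idf-based selective concatenation idea in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: sentences are plain token lists, tf-idf is computed in pure Python, and all function names (`select_context`, `tfidf_vectors`) and the `<spk1>` tag token are hypothetical. The idea: score each preceding sentence against the current first-pass hypothesis by tf-idf cosine similarity, and prepend the most similar one, joined by a tag word, before rescoring.

```python
# Illustrative sketch of tf-idf-based selective context concatenation.
# Not the paper's code; names and the tag token are assumptions.
import math
from collections import Counter


def tfidf_vectors(sentences):
    """Build smoothed tf-idf weight dicts for tokenized sentences."""
    n = len(sentences)
    df = Counter()
    for sent in sentences:
        df.update(set(sent))  # document frequency per token
    vecs = []
    for sent in sentences:
        tf = Counter(sent)
        vecs.append({w: (c / len(sent)) * math.log((1 + n) / (1 + df[w]))
                     for w, c in tf.items()})
    return vecs


def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(wt * v.get(w, 0.0) for w, wt in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0


def select_context(history, hypothesis, tag="<spk1>"):
    """Prepend the history sentence most tf-idf-similar to the current
    first-pass hypothesis, joined by a tag word (e.g. speaker marker)."""
    if not history:
        return list(hypothesis)
    vecs = tfidf_vectors(list(history) + [list(hypothesis)])
    hyp_vec = vecs[-1]
    best = max(range(len(history)), key=lambda i: cosine(vecs[i], hyp_vec))
    return list(history[best]) + [tag] + list(hypothesis)
```

The concatenated token sequence would then be fed to the RNNLM so that its state carries over the selected context; a topically unrelated previous sentence scores near zero and would simply lose the `max` selection to a more similar one.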
Pages: 5