End-to-end Speech-to-Punctuated-Text Recognition

Cited by: 2
Authors
Nozaki, Jumon [1 ]
Kawahara, Tatsuya [1 ]
Ishizuka, Kenkichi [2 ]
Hashimoto, Taiichi
Affiliations
[1] Kyoto Univ, Grad Sch Informat, Kyoto, Japan
[2] RevComm Inc, Tokyo, Japan
Source
INTERSPEECH 2022 | 2022
Keywords
speech recognition; punctuation prediction; connectionist temporal classification; transformer; capitalization
DOI
10.21437/Interspeech.2022-5
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Conventional automatic speech recognition systems do not produce punctuation marks, which are important for the readability of speech recognition results. Punctuation is also needed for subsequent natural language processing tasks such as machine translation. There has been much work on punctuation prediction models that insert punctuation marks into speech recognition results as post-processing. However, these studies do not utilize acoustic information for punctuation prediction and are directly affected by speech recognition errors. In this study, we propose an end-to-end model that takes speech as input and outputs punctuated text. This model is expected to predict punctuation robustly against speech recognition errors while exploiting acoustic information. We also propose incorporating an auxiliary loss that trains the model using the output of an intermediate layer and unpunctuated text. Through experiments, we compare the performance of the proposed model with that of a cascaded system. The proposed model achieves higher punctuation prediction accuracy than the cascaded system without sacrificing speech recognition error rate. It is also demonstrated that multi-task learning using the intermediate-layer output against unpunctuated text is effective. Moreover, the proposed model has only about 1/7 the parameters of the cascaded system.
Pages: 1811-1815
Page count: 5
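The multi-task objective described in the abstract can be sketched as a weighted combination of a final CTC loss over punctuated transcripts and an auxiliary CTC loss that compares an intermediate encoder layer's output with the unpunctuated transcript. The snippet below is an illustrative PyTorch rendering, not the authors' code; the choice of intermediate layer, the weight `aux_weight`, and all tensor shapes are assumptions.

```python
# Minimal sketch of a multi-task CTC objective: final-layer logits are scored
# against punctuated targets, intermediate-layer logits against unpunctuated
# targets, and the two losses are mixed with an assumed weight `aux_weight`.
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

def multitask_ctc_loss(final_logits, inter_logits,
                       punct_targets, plain_targets,
                       in_lens, punct_lens, plain_lens,
                       aux_weight=0.3):
    # nn.CTCLoss expects log-probabilities shaped (T, N, C).
    final_lp = final_logits.log_softmax(-1).transpose(0, 1)
    inter_lp = inter_logits.log_softmax(-1).transpose(0, 1)
    loss_final = ctc(final_lp, punct_targets, in_lens, punct_lens)
    loss_inter = ctc(inter_lp, plain_targets, in_lens, plain_lens)
    return (1 - aux_weight) * loss_final + aux_weight * loss_inter

# Toy example: batch of 2, 50 encoder frames, 30-symbol vocabulary.
T, N, C = 50, 2, 30
final_logits = torch.randn(N, T, C)   # stand-in for final encoder output
inter_logits = torch.randn(N, T, C)   # stand-in for an intermediate layer
punct_targets = torch.randint(1, C, (N, 12))  # punctuated token ids
plain_targets = torch.randint(1, C, (N, 10))  # unpunctuated token ids
in_lens = torch.full((N,), T, dtype=torch.long)
punct_lens = torch.full((N,), 12, dtype=torch.long)
plain_lens = torch.full((N,), 10, dtype=torch.long)
print(multitask_ctc_loss(final_logits, inter_logits,
                         punct_targets, plain_targets,
                         in_lens, punct_lens, plain_lens))
```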