Monotonic Gaussian regularization of attention for robust automatic speech recognition

Cited by: 0
Authors
Du, Yeqian [1 ]
Wu, Minghui [1 ,2 ]
Fang, Xin [1 ,2 ]
Yang, Zhouwang [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] iFlytek Res, Hefei, Anhui, Peoples R China
Keywords
Speech recognition; Multi-head attention; Monotonic alignment; Gaussian distribution; Regularization; Alignment robustness
DOI
10.1016/j.csl.2022.101405
CLC Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Attention-based Encoder-Decoder (AED) models are among the most popular models for Automatic Speech Recognition (ASR). However, AED models can become unstable, producing errors such as incorrect insertions or word repetitions when the inherent monotonic alignment property is violated. To address these problems, we propose a monotonic Gaussian regularization method that guides attention training, where the guiding map is a sequence of Gaussian distributions with monotonically moving centers. Experiments show that our method reduces the insertion error rate by a relative 7% on the HKUST dataset, by a relative 20% and 16% on two large industrial datasets, and by a relative 21% on an out-of-domain test set. The overall Character Error Rates (CERs) are reduced at the same time, indicating that the model's recognition ability is well maintained. Our proposed method therefore improves model performance by enhancing monotonic alignment and provides better robustness.
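The abstract does not spell out the exact loss, so the following PyTorch sketch only illustrates the general idea of regularizing attention toward a guiding map whose rows are Gaussians with monotonically moving centers. The function names (monotonic_gaussian_guide, gaussian_attention_regularizer), the width sigma, the linear center schedule, and the mean-squared penalty are illustrative assumptions, not the paper's formulation.

```python
import torch

def monotonic_gaussian_guide(num_queries, num_keys, sigma=0.2, device=None):
    """Guiding map whose rows are Gaussians over the key (encoder) axis,
    with centers that move monotonically as the query (decoder) index grows.
    This is a hypothetical construction, not the authors' exact definition."""
    q = torch.arange(num_queries, dtype=torch.float32, device=device)
    k = torch.arange(num_keys, dtype=torch.float32, device=device)
    q = q / max(num_queries - 1, 1)      # normalized decoder positions in [0, 1]
    k = k / max(num_keys - 1, 1)         # normalized encoder positions in [0, 1]
    centers = q.unsqueeze(1)             # (T_q, 1): center advances monotonically
    guide = torch.exp(-((k.unsqueeze(0) - centers) ** 2) / (2.0 * sigma ** 2))
    return guide / guide.sum(dim=-1, keepdim=True)   # row-normalize to a distribution

def gaussian_attention_regularizer(attn, sigma=0.2):
    """Mean-squared distance between attention weights and the Gaussian guide.
    attn: (batch, heads, T_q, T_k) cross-attention weights of the decoder."""
    _, _, t_q, t_k = attn.shape
    guide = monotonic_gaussian_guide(t_q, t_k, sigma, device=attn.device)
    return ((attn - guide) ** 2).mean()
```

In such a sketch the regularizer would be added to the usual cross-entropy objective with a small weight, e.g. loss = ce + lam * gaussian_attention_regularizer(cross_attn_weights); the actual weighting, guide width, center schedule, and distance measure used in the paper may differ.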
Pages: 11