Timestamp-aligning and keyword-biasing end-to-end ASR front-end for a KWS system

Cited by: 5
Authors
Shi, Gui-Xin [1 ]
Zhang, Wei-Qiang [1 ]
Wang, Guan-Bo [1 ]
Zhao, Jing [1 ]
Chai, Shu-Zhou [1 ]
Zhao, Ze-Yu [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Dept Elect Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
OpenSAT20; End-to-end ASR; End-to-end KWS; Force alignment; Biased loss; SPEECH RECOGNITION; ENERGY SCORER; SEARCH; ATTENTION;
DOI
10.1186/s13636-021-00212-9
Chinese Library Classification Number
O42 [Acoustics];
Discipline Classification Code
070206 ; 082403 ;
Abstract
Many end-to-end approaches have been proposed to detect predefined keywords. For multi-keyword scenarios, two bottlenecks remain to be resolved: (1) the important data containing keyword(s) is sparsely distributed, and (2) the timestamps of the detected keywords are inaccurate. In this paper, to alleviate the first issue and further improve the performance of the end-to-end ASR front-end, we propose a biased loss function that guides the recognizer to pay more attention to the speech segments containing the predefined keywords. We solve the second issue by modifying the forced alignment applied to the end-to-end ASR front-end: to obtain frame-level alignments, we employ a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) based acoustic model (AM) as an auxiliary aligner. The proposed system is evaluated in the OpenSAT20 evaluation held by the National Institute of Standards and Technology (NIST). The performance of our end-to-end KWS system is comparable to that of the conventional hybrid KWS system, and sometimes slightly better. With the fused results of the end-to-end and conventional KWS systems, we won first prize in the KWS track. On the dev dataset (a part of the SAFE-T corpus), the system outperforms the baseline by a large margin: our system with the GMM-HMM aligner achieves lower segmentation-aware word error rates (a relative decrease of 7.9-19.2%) and higher overall actual term-weighted values (a relative increase of 3.6-11.0%), which demonstrates the effectiveness of the proposed method. For more precise alignments, a DNN-based AM can be used as the aligner at the cost of more computation.
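The keyword-biased loss described in the abstract can be sketched as a simple per-utterance reweighting of the base ASR loss: utterances containing a predefined keyword are upweighted so the recognizer attends more to keyword-bearing segments. This is a minimal illustrative sketch only; the function name `biased_loss`, the linear weighting scheme, and the `bias` factor are assumptions for illustration, not the paper's exact formulation.

```python
def biased_loss(per_utt_loss, contains_keyword, bias=2.0):
    """Weighted average of per-utterance ASR losses.

    per_utt_loss     -- list of base loss values, one per utterance
    contains_keyword -- list of bools, True if the utterance contains
                        a predefined keyword
    bias             -- weight applied to keyword-bearing utterances
                        (non-keyword utterances keep weight 1.0)
    """
    weights = [bias if has_kw else 1.0 for has_kw in contains_keyword]
    weighted = [w * l for w, l in zip(weights, per_utt_loss)]
    # Normalize by total weight so changing `bias` rescales the
    # emphasis, not the overall magnitude of the loss.
    return sum(weighted) / sum(weights)
```

With `bias=2.0`, a keyword utterance contributes twice as much to the training objective as a non-keyword one; setting `bias=1.0` recovers a plain average.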
Pages: 14