Fast Likelihood Computation in Speech Recognition using Matrices

Cited by: 0
Authors
Mrugesh R. Gajjar
T. V. Sreenivas
R. Govindarajan
Affiliations
[1] Siemens Corporate Research and Technologies
[2] Department of Electrical Communication Engineering, Indian Institute of Science
[3] Supercomputer Education & Research Centre, Indian Institute of Science
Source
Journal of Signal Processing Systems, 2013, Vol. 70
Keywords
Speech recognition; Acoustic likelihood computations; Low-rank matrix approximation; Euclidean distance matrix computation; Dynamic time warping;
DOI
Not available
Abstract
Acoustic modeling using mixtures of multivariate Gaussians is the prevalent approach for many speech processing problems. Computing likelihoods against a large set of Gaussians is required in many speech processing systems, and it is the computationally dominant phase in Large Vocabulary Continuous Speech Recognition (LVCSR) systems. We express the likelihood computation as a multiplication of matrices representing augmented feature vectors and Gaussian parameters. The computational gain of this approach over traditional methods comes from exploiting the structure of these matrices and from an efficient implementation of their multiplication. In particular, we explore a direct low-rank approximation of the Gaussian parameter matrix and an indirect derivation of low-rank factors of the Gaussian parameter matrix by optimally approximating the likelihood matrix. We show that both methods lead to similar speedups, but the latter has a far smaller impact on recognition accuracy. Experiments on the 1,138-word vocabulary RM1 task and the 6,224-word vocabulary TIMIT task using the Sphinx 3.7 system show that, in a typical case, the matrix multiplication based approach yields an overall speedup of 46 % on RM1 and 115 % on TIMIT. Our low-rank approximation methods provide a way to trade off recognition accuracy for a further increase in computational performance, extending the overall speedup to 61 % for RM1 and 119 % for TIMIT, at the cost of an increase in word error rate (WER) from 3.2 % to 3.5 % on RM1 and with no increase in WER on TIMIT. We also express the pairwise Euclidean distance computation phase of Dynamic Time Warping (DTW) in terms of matrix multiplication, saving approximately 1/3 of the computational operations. In our experiments, using an efficient implementation of matrix multiplication, this yields a speedup of 5.6 in computing the pairwise Euclidean distances and an overall speedup of up to 3.25 for DTW.
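To make the matrix formulation in the abstract concrete, the following is a minimal sketch, not the authors' code: it assumes diagonal-covariance Gaussian mixtures (as commonly used in Sphinx) and NumPy, and all function names and sizes are chosen purely for illustration. Each feature frame x is augmented to [x², x, 1], and the Gaussian parameters are packed into a (2D+1) × G matrix so that all frame-versus-Gaussian log-likelihoods come out of a single matrix product; a truncated SVD gives one possible direct low-rank approximation of the parameter matrix, and the same expansion trick turns the pairwise Euclidean distances needed for DTW into a matrix product.

```python
import numpy as np

# Minimal sketch (assumed, not the authors' implementation): likelihoods of
# diagonal-covariance Gaussians computed as one matrix multiplication.

def build_gaussian_param_matrix(means, variances, weights):
    """Pack G diagonal-covariance Gaussians (means, variances: (G, D);
    weights: (G,)) into a (2D+1) x G matrix W so that the log-likelihood of
    frame x against every Gaussian is the dot product [x^2, x, 1] . W."""
    G, D = means.shape
    inv_var = 1.0 / variances
    const = (np.log(weights)
             - 0.5 * D * np.log(2.0 * np.pi)
             - 0.5 * np.sum(np.log(variances), axis=1)
             - 0.5 * np.sum(means ** 2 * inv_var, axis=1))
    return np.vstack([(-0.5 * inv_var).T,    # quadratic-term coefficients
                      (means * inv_var).T,   # linear-term coefficients
                      const[None, :]])       # per-Gaussian constant

def augment_features(X):
    """Augment a (T, D) feature matrix to (T, 2D+1): [x^2, x, 1] per frame."""
    return np.hstack([X ** 2, X, np.ones((X.shape[0], 1))])

def log_likelihoods(X, W):
    """All T x G frame-vs-Gaussian log-likelihoods via a single matrix product."""
    return augment_features(X) @ W

def low_rank_factors(W, rank):
    """Direct low-rank approximation of the parameter matrix (truncated SVD),
    so A @ W becomes the cheaper (A @ Wl) @ Wr."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

def pairwise_sq_dist(X, Y):
    """Pairwise squared Euclidean distances for DTW via
    ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y; the cross term is a matrix product."""
    return (np.sum(X ** 2, axis=1)[:, None]
            + np.sum(Y ** 2, axis=1)[None, :]
            - 2.0 * (X @ Y.T))

# Usage with hypothetical sizes: 39-dim features, 4096 Gaussians, 500 frames.
rng = np.random.default_rng(0)
means = rng.standard_normal((4096, 39))
variances = rng.uniform(0.5, 2.0, size=(4096, 39))
weights = np.full(4096, 1.0 / 4096)
X = rng.standard_normal((500, 39))

W = build_gaussian_param_matrix(means, variances, weights)
L = log_likelihoods(X, W)                     # exact, shape (500, 4096)
Wl, Wr = low_rank_factors(W, rank=40)
L_approx = (augment_features(X) @ Wl) @ Wr    # approximate, fewer operations
```

Because the dominant cost in each function is a single dense matrix product, it can be dispatched to an optimized GEMM routine, which is consistent with the abstract's point that the gain comes from matrix structure and efficient multiplication. For the distance case, the direct loop costs roughly 3·T²·D operations (subtract, square, accumulate per pair and dimension) while the cross-term formulation costs about 2·T²·D plus lower-order terms, which is one way to read the abstract's "approximately 1/3" saving.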
Pages: 219 - 234
Page count: 15
Related Papers
50 items in total
  • [41] Speech Recognition using Facial sEMG
    Soon, Mok Win
    Anuar, Muhammad Ikmal Hanafi
    Abidin, Mohamad Hafizat Zainal
    Azaman, Ahmad Syukri
    Noor, Norliza Mohd
    2017 IEEE INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS (ICSIPA), 2017, : 1 - 5
  • [42] Using speech rhythm knowledge to improve dysarthric speech recognition
    Selouani, S. -A.
    Dahmani, H.
    Amami, R.
    Hamam, H.
    INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2012, 15 (01) : 57 - 64
  • [43] Speech Disorder Malay Speech Recognition System
    Al-Haddad, S. A. R.
    SENSORS, SIGNALS, VISUALIZATION, IMAGING, SIMULATION AND MATERIALS, 2009, : 69 - 75
  • [44] IMPROVING SPEECH RECOGNITION USING CONSISTENT PREDICTIONS ON SYNTHESIZED SPEECH
    Wang, Gary
    Rosenberg, Andrew
    Chen, Zhehuai
    Zhang, Yu
    Ramabhadran, Bhuvana
    Wu, Yonghui
    Moreno, Pedro
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7029 - 7033
  • [46] An Emotion Estimation from Human Speech Using Speech Recognition and Speech Synthesize
    Kurematsu, Masaki
    Ohashi, Marina
    Kinosita, Orimi
    Hakura, Jun
    Fujita, Hamido
    NEW TRENDS IN SOFTWARE METHODOLOGIES, TOOLS AND TECHNIQUES, 2008, 182 : 278 - 289
  • [47] A Fast Convolutional Self-attention Based Speech Dereverberation Method for Robust Speech Recognition
    Li, Nan
    Ge, Meng
    Wang, Longbiao
    Dang, Jianwu
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT III, 2019, 11955 : 295 - 305
  • [48] Efficient likelihood evaluation and dynamic Gaussian selection for HMM-based speech recognition
    Cai, Jun
    Bouselmi, Ghazi
    Laprie, Yves
    Haton, Jean-Paul
    COMPUTER SPEECH AND LANGUAGE, 2009, 23 (02) : 147 - 164
  • [49] LOW RESOURCE SPEECH RECOGNITION WITH AUTOMATICALLY LEARNED SPARSE INVERSE COVARIANCE MATRICES
    Zhang, Weibin
    Fung, Pascale
    2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2012, : 4737 - 4740
  • [50] Qur'an Recognition For The Purpose Of Memorisation Using Speech Recognition Technique
    Abro, Bushra
    Naqvi, Asma Batool
    Hussain, Ayyaz
    2012 15TH INTERNATIONAL MULTITOPIC CONFERENCE (INMIC), 2012, : 28 - 32