Towards Efficient Recurrent Architectures: A Deep LSTM Neural Network Applied to Speech Enhancement and Recognition

Cited by: 5
Authors
Wang, Jing [1 ]
Saleem, Nasir [2 ,3 ]
Gunawan, Teddy Surya [3 ]
Affiliations
[1] Yunnan Univ, Sch Mat Sci & Engn, Kunming City, Yunnan Province, Peoples R China
[2] Gomal Univ, Fac Engn & Technol, Dept Elect Engn, Dera Ismail Khan 29050, Pakistan
[3] Int Islamic Univ Malaysia IIUM, Dept Elect & Comp Engn, Kuala Lumpur, Malaysia
Keywords
Deep learning; Speech enhancement; Speech recognition; Skip connections; LSTM; Acoustic features; Attention process; Noise
DOI
10.1007/s12559-024-10288-y
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Long short-term memory (LSTM) networks have proven effective in modeling sequential data and play a central role in speech enhancement by capturing temporal dependencies in speech signals; however, they may struggle to accurately capture long-term temporal dependencies. This paper introduces a variable-neurons-based LSTM designed to capture long-term temporal dependencies by reducing the neuron representation in successive layers without loss of information. Skip connections between nonadjacent layers are added to prevent vanishing gradients, and an attention mechanism in these connections highlights important features and spectral components. The proposed LSTM is inherently causal, making it well suited for real-time processing without relying on future information. Training uses combined acoustic feature sets for improved performance, and the models estimate two time-frequency masks: the ideal ratio mask (IRM) and the ideal binary mask (IBM). Comprehensive evaluation using perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) showed that the proposed LSTM architecture improves speech intelligibility and perceptual quality. Composite measures of background noise distortion (Cbak) and speech distortion (Csig) further substantiated this performance. The proposed model achieved a 16.21% improvement in STOI and a 0.69 improvement in PESQ over noisy mixtures on the TIMIT database; on the LibriSpeech database, the corresponding improvements were 16.41% and 0.71. The proposed LSTM architecture outperforms deep neural networks (DNNs) under both stationary and nonstationary background noise conditions. To train an automatic speech recognition (ASR) system on the enhanced speech, the Kaldi toolkit was used to evaluate word error rate (WER); with the proposed LSTM at the front end, WER was reduced to 15.13% across different noisy backgrounds.
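The two training targets named in the abstract, the ideal ratio mask (IRM) and the ideal binary mask (IBM), have standard textbook definitions in terms of the speech and noise magnitude spectrograms. A minimal NumPy sketch of those definitions follows; the local-criterion threshold `lc_db` and the toy 2x2 "spectrograms" are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag):
    # IRM: soft mask in [0, 1], the square root of the per-bin
    # speech-to-(speech+noise) power ratio
    return np.sqrt(speech_mag**2 / (speech_mag**2 + noise_mag**2 + 1e-12))

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    # IBM: 1 where the local SNR (in dB) exceeds the threshold lc_db, else 0
    snr_db = 20.0 * np.log10((speech_mag + 1e-12) / (noise_mag + 1e-12))
    return (snr_db > lc_db).astype(np.float32)

# Toy 2x2 magnitude "spectrograms" (frequency x time), for illustration only
S = np.array([[3.0, 1.0], [0.5, 2.0]])
N = np.array([[1.0, 1.0], [2.0, 0.5]])
irm = ideal_ratio_mask(S, N)   # soft values in [0, 1]
ibm = ideal_binary_mask(S, N)  # hard 0/1 values
```

In a mask-based enhancement pipeline such as the one described, the network would predict one of these masks from noisy features, and the mask would then be applied to the noisy magnitude spectrogram before resynthesis.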
Pages: 1221-1236
Page count: 16
References (57 total)
[1] Abd El-Moneim, Samia; Nassar, M. A.; Dessouky, Moawad I.; Ismail, Nabil A.; El-Fishawy, Adel S.; Abd El-Samie, Fathi E. Text-independent speaker recognition using LSTM-RNN and speech enhancement. Multimedia Tools and Applications, 2020, 79(33-34): 24013-24028.
[2] Baby, D. 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 106. DOI: 10.1109/ICASSP.2019.8683799.
[3] Boll, S. F. Suppression of acoustic noise in speech using spectral subtraction. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1979, 27(2): 113-120.
[4] Chang, B. 2017, arXiv.
[5] Chen, Jitong; Wang, DeLiang. Long short-term memory for speaker generalization in supervised speech separation. Journal of the Acoustical Society of America, 2017, 141(6): 4705-4714.
[6] Chen, Jun; Wang, Zilin; Tuo, Deyi; Wu, Zhiyong; Kang, Shiyin; Meng, Helen. FullSubNet+: channel attention FullSubNet with complex spectrograms for speech enhancement. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 7857-7861.
[7] Defossez, A. arXiv.
[8] Ephraim, Y.; Malah, D. Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1984, 32(6): 1109-1121.
[9] Fedorov, I. 2020, arXiv.
[10] Fernandez-Diaz, Miguel; Gallardo-Antolin, Ascension. An attention Long Short-Term Memory based system for automatic classification of speech intelligibility. Engineering Applications of Artificial Intelligence, 2020, 96.