Time-Frequency Localization Using Deep Convolutional Maxout Neural Network in Persian Speech Recognition

Cited by: 2
Authors
Dehghani, Arash [1 ]
Seyyedsalehi, Seyyed Ali [1 ]
Affiliations
[1] Amirkabir Univ Technol, Fac Biomed Engn, Hafez Ave, Tehran, Iran
Keywords
Time-Frequency Localization; Deep Neural Networks; Convolutional Neural Networks; Speech Recognition; Maxout; Dropout; SPECTROTEMPORAL RECEPTIVE-FIELDS; TASK-RELATED PLASTICITY; FILTER BANK FEATURES; OPTIMIZATION; NEURONS; LAYER; NETS;
DOI
10.1007/s11063-022-11006-1
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
In this paper, a CNN-based structure for the time-frequency localization of information is proposed for Persian speech recognition. Research has shown that the spectrotemporal plasticity of receptive fields in some neurons of the mammalian primary auditory cortex and midbrain provides localization capabilities that improve recognition performance. Over the past few years, much work has been done to localize time-frequency information in ASR systems using the spatial or temporal invariance properties of methods such as HMMs, TDNNs, CNNs, and LSTM-RNNs. However, most of these models have large numbers of parameters and are challenging to train. For this purpose, we present a structure called the Time-Frequency Convolutional Maxout Neural Network (TFCMNN), in which parallel time-domain and frequency-domain 1D-CMNNs are applied simultaneously and independently to the spectrogram; their outputs are then concatenated and fed jointly into a fully connected maxout network for classification. To improve the performance of this structure, we use recently developed methods such as dropout, maxout, and weight normalization. Two sets of experiments were designed and implemented on the FARSDAT dataset to evaluate the performance of this model against conventional 1D-CMNN models. According to the experimental results, the average recognition score of the TFCMNN models is about 1.6% higher than that of conventional 1D-CMNN models. In addition, the average training time of the TFCMNN models is about 17 hours lower than that of the conventional models. Therefore, consistent with prior findings, time-frequency localization in ASR systems increases system accuracy and speeds up the training process.
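The parallel time/frequency branch structure described in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: all layer sizes, kernel widths, maxout piece counts, and the number of output classes are illustrative assumptions, and the actual TFCMNN configuration (including weight normalization and training details) is given in the paper itself.

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Maxout unit: linear map to k pieces, elementwise max over pieces."""
    def __init__(self, in_features, out_features, k=2):
        super().__init__()
        self.out_features, self.k = out_features, k
        self.fc = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        y = self.fc(x)
        y = y.view(*y.shape[:-1], self.out_features, self.k)
        return y.max(dim=-1).values

class TFCMNN(nn.Module):
    """Sketch of the TFCMNN idea: two independent 1D conv branches over a
    spectrogram of shape (batch, freq_bins, frames) -- one convolving along
    time, one along frequency -- whose outputs are concatenated and passed
    to a fully connected maxout classifier with dropout."""
    def __init__(self, freq_bins=40, frames=100, channels=32, n_classes=30):
        super().__init__()
        # Time branch: 1D conv along the time axis (frequency bins as channels).
        self.time_conv = nn.Conv1d(freq_bins, channels, kernel_size=5, padding=2)
        # Frequency branch: 1D conv along the frequency axis (frames as channels).
        self.freq_conv = nn.Conv1d(frames, channels, kernel_size=5, padding=2)
        self.drop = nn.Dropout(0.5)
        feat = channels * frames + channels * freq_bins
        self.maxout = Maxout(feat, 128, k=2)
        self.out = nn.Linear(128, n_classes)

    def forward(self, spec):                                  # (B, F, T)
        t = self.time_conv(spec).flatten(1)                   # conv over time
        f = self.freq_conv(spec.transpose(1, 2)).flatten(1)   # conv over freq
        z = torch.cat([t, f], dim=1)                          # join branches
        return self.out(self.drop(self.maxout(z)))

# Toy usage: a batch of 4 spectrograms, 40 mel bins x 100 frames.
x = torch.randn(4, 40, 100)
logits = TFCMNN()(x)          # shape (4, 30)
```

The key design point mirrored here is that each branch sees the spectrogram along only one axis, so time and frequency structure are localized independently before the joint fully connected stage.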
Pages: 3205-3224
Number of pages: 20