Automatic speech recognition using advanced deep learning approaches: A survey

Cited by: 22
Authors
Kheddar, Hamza [1 ]
Hemis, Mustapha [2 ]
Himeur, Yassine [3 ]
Affiliations
[1] Univ Medea, Dept Elect Engn, LSEA Lab, Medea 26000, Algeria
[2] Univ Sci & Technol Houari Boumediene USTHB, LCPTS Lab, POB 32, Algiers 16111, Algeria
[3] Univ Dubai, Coll Engn & Informat Technol, Dubai, U Arab Emirates
Keywords
Automatic speech recognition; Deep transfer learning; Transformers; Federated learning; Reinforcement learning; Language model; Network; ASR; Lightweight
DOI
10.1016/j.inffus.2024.102422
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent advancements in deep learning (DL) have posed a significant challenge for automatic speech recognition (ASR). ASR relies on extensive training datasets, including confidential ones, and demands substantial computational and storage resources. Enabling adaptive systems improves ASR performance in dynamic environments. Conventional DL techniques assume training and testing data originate from the same domain, which is not always true. Advanced DL techniques like deep transfer learning (DTL), federated learning (FL), and deep reinforcement learning (DRL) address these issues. DTL allows building high-performance models from small yet related datasets, FL enables training on confidential data without dataset possession, and DRL optimizes decision-making in dynamic environments, reducing computation costs. This survey offers a comprehensive review of DTL, FL, and DRL-based ASR frameworks, aiming to provide insights into the latest developments and aid researchers and professionals in understanding the current challenges. Additionally, Transformers, which are advanced DL techniques heavily used in proposed ASR frameworks, are considered in this survey for their ability to capture extensive dependencies in the input ASR sequence. The paper starts by presenting the background of DTL, FL, DRL, and Transformers and then adopts a well-designed taxonomy to outline the state-of-the-art (SOTA) approaches. Subsequently, a critical analysis is conducted to identify the strengths and weaknesses of each framework. Additionally, a comparative study is presented to highlight the existing challenges, paving the way for future research opportunities.
Pages: 19
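
To ground the DTL mechanism mentioned in the abstract, below is a minimal, illustrative sketch of transfer learning for ASR, assuming PyTorch and torchaudio's pretrained wav2vec 2.0 bundle (WAV2VEC2_ASR_BASE_960H); the fine_tune_step helper is a hypothetical name, and the sketch is not drawn from any framework surveyed in the paper. The pretrained convolutional feature extractor is frozen so that only the Transformer encoder and the output head adapt to a small, related target-domain corpus.

import torch
import torchaudio

# Pretrained source model: wav2vec 2.0 fine-tuned on LibriSpeech 960 h.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()

# Freeze the low-level convolutional feature extractor; only the
# Transformer encoder and output head are updated on the target data.
for p in model.feature_extractor.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5)
ctc = torch.nn.CTCLoss(blank=0, zero_infinity=True)

def fine_tune_step(waveforms, wave_lengths, targets, target_lengths):
    """One transfer-learning step on a small in-domain batch; targets are
    integer indices into bundle.get_labels(), where index 0 is the blank."""
    model.train()
    emissions, frame_lengths = model(waveforms, wave_lengths)         # (B, T, C)
    log_probs = torch.log_softmax(emissions, dim=-1).transpose(0, 1)  # (T, B, C)
    loss = ctc(log_probs, targets, frame_lengths, target_lengths)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Freezing most of the pretrained network and using a small learning rate is one common way the "small yet related datasets" idea plays out in practice; FL or DRL variants would wrap such a step in client-side aggregation or a reward-driven policy, respectively.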