Automatic Speech Recognition Advancements for Indigenous Languages of the Americas

Cited by: 1
Authors
Romero, Monica [1 ]
Gomez-Canaval, Sandra [1 ]
Torre, Ivan G. [1 ]
Affiliations
[1] Univ Politecn Madrid, ETS Comp Syst Engn, Madrid 28031, Spain
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 15
Keywords
automatic speech recognition; natural language processing; low-resource languages; Indigenous languages; NeurIPS
DOI
10.3390/app14156497
Chinese Library Classification (CLC) number
O6 [Chemistry];
Discipline classification code
0703
Abstract
Indigenous languages are a fundamental legacy in the development of human communication, embodying the unique identity and culture of local communities in the Americas. The Second AmericasNLP Competition Track 1 of NeurIPS 2022 proposed the task of training automatic speech recognition (ASR) systems for five Indigenous languages: Quechua, Guarani, Bribri, Kotiria, and Wa'ikhana. In this paper, we describe the fine-tuning of a state-of-the-art ASR model for each target language, using approximately 36.65 h of transcribed speech data from diverse sources, enriched with data augmentation methods. Using a Bayesian search, we systematically investigate the impact of the different hyperparameters on the 300M- and 1B-parameter variants of Wav2vec2.0 XLS-R. Our findings indicate that data quantity and careful hyperparameter tuning significantly affect ASR accuracy, but that language complexity determines the final result. The Quechua model achieved the lowest character error rate (CER) (12.14), while the Kotiria model, despite having the most extensive dataset during the fine-tuning phase, showed the highest CER (36.59). Conversely, with the smallest dataset, the Guarani model achieved a CER of 15.59, while Bribri and Wa'ikhana obtained CERs of 34.70 and 35.23, respectively. Additionally, Sobol' sensitivity analysis highlighted the crucial roles of freeze fine-tuning updates and dropout rates. We release our best models for each language, marking the first open ASR models for Wa'ikhana and Kotiria. This work opens avenues for future research to advance ASR techniques in preserving minority Indigenous languages.
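The abstract reports results as character error rate (CER), the standard ASR metric: the character-level edit distance between the reference transcript and the model's hypothesis, normalized by the reference length. A minimal sketch of this computation in pure Python (the function names `levenshtein` and `cer` are illustrative, not from the paper):

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between two character sequences
    (minimum number of insertions, deletions, and substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance / reference length, as a percentage."""
    return 100.0 * levenshtein(ref, hyp) / len(ref)

# A perfect hypothesis yields CER 0; each wrong character raises it
print(cer("wa'ikhana", "wa'ikhana"))  # → 0.0
```

Note that CER can exceed 100% when the hypothesis contains many insertions, which is why low-resource models with noisy decoding are usually compared on CER rather than word error rate for morphologically rich languages.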
Pages: 14
Related Papers
50 records in total
  • [21] Automatic Speech Recognition in Different Languages Using High-Density Surface Electromyography Sensors
    Zhu, Mingxing
    Huang, Zhen
    Wang, Xiaochen
    Wang, Xin
    Wang, Cheng
    Zhang, Haoshi
    Zhao, Guoru
    Chen, Shixiong
    Li, Guanglin
    IEEE SENSORS JOURNAL, 2021, 21 (13) : 14155 - 14167
  • [22] Automatic speech recognition: a survey
    Malik, Mishaim
    Malik, Muhammad Kamran
    Mehmood, Khawar
    Makhdoom, Imran
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (06) : 9411 - 9457
  • [23] NETWORKS FOR SPEECH ENHANCEMENT AND AUTOMATIC SPEECH RECOGNITION
    Vu, Thanh T.
    Bigot, Benjamin
    Chng, Eng Siong
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 499 - 503
  • [24] Efficient automatic speech recognition
    O'Shaughnessy, D
    PROCEEDINGS OF THE EIGHTH IASTED INTERNATIONAL CONFERENCE ON INTERNET AND MULTIMEDIA SYSTEMS AND APPLICATIONS, 2004, : 323 - 327
  • [25] A Survey of Automatic Speech Recognition for Dysarthric Speech
    Qian, Zhaopeng
    Xiao, Kejing
    ELECTRONICS, 2023, 12 (20)
  • [26] ASR for Documenting Acutely Under-Resourced Indigenous Languages
    Jimerson, Robbie
    Prud'hommeaux, Emily
    PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 4161 - 4166
  • [27] Transformer-Based Turkish Automatic Speech Recognition
    Tasar, Davut Emre
    Koruyan, Kutan
    Cilgin, Cihan
ACTA INFOLOGICA, 2024, 8 (01) : 1 - 10
  • [28] Hybrid deep learning based automatic speech recognition model for recognizing non-Indian languages
    Gupta, Astha
    Kumar, Rakesh
    Kumar, Yogesh
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (10) : 30145 - 30166
  • [30] OpenASR21: The Second Open Challenge for Automatic Speech Recognition of Low-Resource Languages
    Peterson, Kay
    Tong, Audrey
    Yu, Yan
    INTERSPEECH 2022, 2022, : 4895 - 4899