Automatic Speech Recognition for Uyghur, Kazakh, and Kyrgyz: An Overview

Cited: 9
Authors
Du, Wenqiang [1 ]
Maimaitiyiming, Yikeremu [2 ]
Nijat, Mewlude [2 ]
Li, Lantian [3 ]
Hamdulla, Askar [2 ]
Wang, Dong [1 ]
Affiliations
[1] Tsinghua Univ, Ctr Speech & Language Technol, BNRist, Beijing 100084, Peoples R China
[2] Xinjiang Univ, Sch Informat Sci & Engn, Urumqi 830017, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 1
Keywords
overview; automatic speech recognition; low-resource; Uyghur; Kazakh; Kyrgyz; UNDER-RESOURCED LANGUAGES; DEEP NEURAL-NETWORKS; ASR;
DOI
10.3390/app13010326
CLC Number
O6 [Chemistry];
Discipline Code
0703;
Abstract
With the emergence of deep learning, the performance of automatic speech recognition (ASR) systems has improved remarkably. For resource-rich languages such as English and Chinese in particular, commercial deployment has become feasible in a wide range of applications. However, most languages are low-resource languages, which presents three main difficulties for the development of ASR systems: (1) the scarcity of data; (2) uncertainty in writing and pronunciation; (3) the individuality of each language. Uyghur, Kazakh, and Kyrgyz are all examples of low-resource languages; each exhibits clear geographical variation in pronunciation and possesses its own acoustic properties and phonological rules. On the other hand, all three belong to the Turkic branch of the Altaic language family, so they share many commonalities. This paper presents an overview of speech recognition techniques developed for Uyghur, Kazakh, and Kyrgyz, with the purposes of (1) highlighting the techniques that are specifically effective for each language and those generally effective for all of them, and (2) discovering, through a comparative study of the development paths of these three neighboring languages, the important factors in promoting speech recognition research on low-resource languages.
Pages: 25