The Detection of Dysarthria Severity Levels Using AI Models: A Review

Cited by: 4
Authors
Al-Ali, Afnan [1 ]
Al-Maadeed, Somaya [1 ]
Saleh, Moutaz [1 ]
Naidu, Rani Chinnappa [2 ]
Alex, Zachariah C. [2 ]
Ramachandran, Prakash [2 ]
Khoodeeram, Rajeev [3 ]
Kumar, Rajesh M. [2 ]
Affiliations
[1] Qatar Univ, Comp Sci & Engn Dept, Doha, Qatar
[2] Vellore Inst Technol, Vellore 632014, Tamil Nadu, India
[3] Univ Mascareignes, Fac Sustainable Dev & Engn, Beau Bassin Rose Hill 71203, Mauritius
Keywords
Feature extraction; Speech processing; Lips; Spectrogram; Medical services; Neurological diseases; Artificial intelligence; Classification algorithms; Speech analysis; Dysarthria; classification; severity levels; artificial intelligence (AI)-based models; intelligibility; INTELLIGIBILITY ASSESSMENT; SPEECH-INTELLIGIBILITY; CLASSIFICATION; INDIVIDUALS; SELECTION; FEATURES; DATABASE; TIME;
DOI
10.1109/ACCESS.2024.3382574
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Dysarthria, a speech disorder stemming from neurological conditions, affects communication and quality of life. Precise classification and severity assessment are pivotal for therapy but are often subjective in traditional speech-language pathologist evaluations. Machine learning models offer the potential for objective assessment, enhancing diagnostic precision. This systematic review comprehensively analyzes current methodologies for classifying dysarthria by severity level, highlighting effective features for automatic classification and optimal AI techniques. We systematically reviewed the literature on the automatic classification of dysarthria severity levels, drawing on electronic databases and grey literature, with selection criteria established on the basis of relevance to the research questions. The findings of this systematic review contribute to the current understanding of dysarthria classification, inform future research, and support the development of improved diagnostic tools. These findings could be significant in advancing patient care and improving therapeutic outcomes for individuals affected by dysarthria.
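As context for the pipeline the review surveys (acoustic feature extraction followed by a severity classifier), the sketch below is illustrative only: it uses synthetic feature vectors in place of real MFCCs extracted from speech, and a simple nearest-centroid rule as a stand-in for the SVM/CNN models discussed in the reviewed papers. The severity labels and cluster layout are assumptions, not from the article.

```python
import numpy as np

# Illustrative sketch: classify dysarthria severity from per-utterance
# acoustic feature vectors (e.g., mean MFCCs). Data here is synthetic;
# real pipelines extract features from recorded speech.

rng = np.random.default_rng(0)
SEVERITIES = ["very_low", "low", "medium", "high"]  # assumed label set

# Synthetic training data: one Gaussian cluster of 13-dim feature
# vectors per severity level, centered at 0, 1, 2, 3 respectively.
train_X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 13))
                     for i in range(len(SEVERITIES))])
train_y = np.repeat(np.arange(len(SEVERITIES)), 20)

# Nearest-centroid classifier: mean feature vector per class.
centroids = np.array([train_X[train_y == k].mean(axis=0)
                      for k in range(len(SEVERITIES))])

def predict(x):
    """Return the severity label whose centroid is closest to x."""
    dists = np.linalg.norm(centroids - x, axis=1)
    return SEVERITIES[int(np.argmin(dists))]

# An utterance whose features lie near the "medium" cluster.
sample = rng.normal(loc=2, scale=0.3, size=13)
print(predict(sample))
```

In practice, the reviewed systems replace the synthetic vectors with features such as MFCCs or spectrograms and the centroid rule with trained models; the structure of the pipeline, features in, severity label out, is the same.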
Pages: 48223-48238
Page count: 16
References (108 total)
[1] Altaher, A. M. The Open Public Health Journal, 2019, 12: 384. DOI 10.2174/1874944501912010384
[2] Accardo, A. P.; Mumolo, E. An algorithm for the automatic differentiation between the speech of normals and patients with Friedreich's ataxia based on the short-time fractal dimension. Computers in Biology and Medicine, 1998, 28(1): 75-89.
[3] Ackermann, H.; Hertrich, I.; Scharf, G. Kinematic analysis of lower lip movements in ataxic dysarthria. Journal of Speech and Hearing Research, 1995, 38(6): 1252-1259.
[4] Akkoc, S. An empirical comparison of conventional techniques, neural networks and the three stage hybrid Adaptive Neuro Fuzzy Inference System (ANFIS) model for credit scoring analysis: The case of Turkish credit card data. European Journal of Operational Research, 2012, 222(1): 168-178.
[5] Al-Qatab, B. A.; Mustafa, M. B. Classification of dysarthric speech according to the severity of impairment: An analysis of acoustic features. IEEE Access, 2021, 9: 18183-18194.
[6] [Anonymous], 2010, PROC WORKSHOP SPEECH
[7] Ayvaz, U.; Guruler, H.; Khan, F.; Ahmed, N.; Whangbo, T.; Bobomirzaevich, A. A. Automatic speaker recognition using Mel-frequency cepstral coefficients through machine learning. CMC-Computers Materials & Continua, 2022, 71(3): 5511-5521.
[8] Bandini, A.; Green, J. R.; Taati, B.; Orlandi, S.; Zinman, L.; Yunusova, Y. Automatic detection of amyotrophic lateral sclerosis (ALS) from video-based analysis of facial movements: Speech and non-speech tasks. Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), 2018: 150-157.
[9] Barkmeier-Kraemer, J. M. Tremor and Other Hyperkinetic Movements, 2017, 7. DOI 10.7916/D8Z32B30
[10] Bhat, C. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017: 5070. DOI 10.1109/ICASSP.2017.7953122