Quantification of Automatic Speech Recognition System Performance on d/Deaf and Hard of Hearing Speech

Cited: 0
Authors
Zhao, Robin [1 ]
Choi, Anna S. G. [2 ]
Koenecke, Allison [2 ]
Rameau, Anais [1 ]
Affiliations
[1] Weill Cornell Med Coll, Sean Parker Inst Voice, New York, NY, USA
[2] Cornell Univ, Dept Informat Sci, Ithaca, NY, USA
Keywords
artificial intelligence; voice; deaf speech; intelligibility; children; perception; skills
DOI
10.1002/lary.31713
Chinese Library Classification (CLC)
R-3 [Medical research methods]; R3 [Basic medicine]
Discipline Code
1001
Abstract
Objective: To evaluate the performance of commercial automatic speech recognition (ASR) systems on d/Deaf and hard-of-hearing (d/Dhh) speech.

Methods: A corpus of 850 audio files of d/Dhh and normal-hearing (NH) speech from the University of Memphis Speech Perception Assessment Laboratory was tested on four speech-to-text application programming interfaces (APIs): Amazon Web Services, Microsoft Azure, Google Chirp, and OpenAI Whisper. We quantified the word error rate (WER) of API transcriptions for 24 d/Dhh and nine NH participants and performed subgroup analyses by speech intelligibility classification (SIC), hearing loss (HL) onset, and primary communication mode.

Results: Mean WER averaged across APIs was 10 times higher for the d/Dhh group (52.6%) than for the NH group (5.0%). APIs performed significantly worse for the "low" and "medium" SIC groups (85.9% and 46.6% WER, respectively) than for the "high" SIC group (9.5% WER, comparable to the NH group). APIs performed significantly worse for speakers with prelingual HL than for speakers with postlingual HL (80.5% vs. 37.1% WER). APIs also performed significantly worse for speakers communicating primarily in sign language (70.2% WER) than for speakers using both oral and sign language communication (51.5%) or oral communication only (19.7%).

Conclusion: Commercial ASR systems underperform for d/Dhh individuals, especially those with "low" or "medium" SIC, prelingual onset of HL, and sign language as their primary communication mode. This contrasts with Big Tech companies' promises of accessibility and indicates the need for ASR systems ethically trained on heterogeneous d/Dhh speech data.

Level of Evidence: 3. Laryngoscope, 2024.

Lay Summary: Commercial ASR systems underperform for d/Dhh individuals, especially those with "low" and "medium" speech intelligibility classification, prelingual onset of hearing loss, and sign language as their primary communication mode. There is a need for ASR systems ethically trained on heterogeneous d/Dhh speech data.
Pages: 191-197
Page count: 7
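The metric used throughout the abstract, word error rate (WER), is the word-level edit distance between a reference transcript and the ASR output, divided by the number of words in the reference. The Python sketch below illustrates that standard definition; it is not the authors' evaluation code, and the example sentence, function name, and normalization choices (simple lowercasing and whitespace tokenization) are assumptions made for illustration only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Illustrative WER: (substitutions + deletions + insertions) / reference length."""
    ref = reference.lower().split()   # assumed normalization: lowercase, whitespace tokens
    hyp = hypothesis.lower().split()
    # Dynamic-programming word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                   # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                   # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    # Hypothetical reference/hypothesis pair: one substitution out of four words -> WER 0.25
    print(word_error_rate("the quick brown fox", "the quick brown box"))

In practice WER is usually computed with an established package (for example, jiwer in Python), which applies the same edit-distance definition but offers configurable text normalization; normalization choices can noticeably change reported WER, so any comparison against the percentages above assumes matching preprocessing.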
Related Papers
50 records in total
  • [1] Deaf, Hard of Hearing, and Hearing Perspectives on using Automatic Speech Recognition in Conversation
    Glasser, Abraham
    Kushalnagar, Kesavan
    Kushalnagar, Raja
PROCEEDINGS OF THE 19TH INTERNATIONAL ACM SIGACCESS CONFERENCE ON COMPUTERS AND ACCESSIBILITY (ASSETS'17), 2017: 427-432
  • [2] Feasibility of Using Automatic Speech Recognition with Voices of Deaf and Hard-of-Hearing Individuals
    Glasser, Abraham T.
    Kushalnagar, Kesavan R.
    Kushalnagar, Raja S.
PROCEEDINGS OF THE 19TH INTERNATIONAL ACM SIGACCESS CONFERENCE ON COMPUTERS AND ACCESSIBILITY (ASSETS'17), 2017: 373-374
  • [3] Towards More Robust Speech Interactions for Deaf and Hard of Hearing Users
    Fok, Raymond
    Kaur, Harmanpreet
    Palani, Skanda
    Mott, Martez E.
    Lasecki, Walter S.
ASSETS'18: PROCEEDINGS OF THE 20TH INTERNATIONAL ACM SIGACCESS CONFERENCE ON COMPUTERS AND ACCESSIBILITY, 2018: 57-67
  • [4] Applications of automatic speech recognition and text-to-speech technologies for hearing assessment: a scoping review
    Fatehifar, Mohsen
    Schlittenlacher, Josef
    Almufarrij, Ibrahim
    Wong, David
    Cootes, Tim
    Munro, Kevin J.
INTERNATIONAL JOURNAL OF AUDIOLOGY, 2024
  • [5] Visualizing speech styles in captions for deaf and hard-of-hearing viewers
    Ahn, Sooyeon
    Kim, Jooyeong
    Shin, Choonsung
    Hong, Jin-Hyuk
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2025, 194
  • [6] Preliminary Evaluation of Automated Speech Recognition Apps for the Hearing Impaired and Deaf
    Pragt, Leontien
    van Hengel, Peter
    Grob, Dagmar
    Wasmann, Jan-Willem A.
    FRONTIERS IN DIGITAL HEALTH, 2022, 4
  • [7] Predicting Early Literacy: Auditory and Visual Speech Decoding in Deaf and Hard-of-Hearing Children
    Couvee, Sascha
    Wauters, Loes
    Verhoeven, Ludo
    Knoors, Harry
    Segers, Eliane
JOURNAL OF DEAF STUDIES AND DEAF EDUCATION, 2022: 311-323
  • [8] Evaluation of an Automatic Speech Recognition Platform for Dysarthric Speech
    Calvo, Irene
    Tropea, Peppino
    Vigano, Mauro
    Scialla, Maria
    Cavalcante, Agnieszka B.
    Grajzer, Monika
    Gilardone, Marco
    Corbo, Massimo
FOLIA PHONIATRICA ET LOGOPAEDICA, 2021, 73(5): 432-441
  • [9] Measuring speech intelligibility with deaf and hard-of-hearing children: A systematic review
    Stefansdottir, Harpa
    Crowe, Kathryn
    Magnusson, Egill
    Guiberson, Mark
    Masdottir, Thora
    Agustsdottir, Inga
Baldursdottir, Osp V.
JOURNAL OF DEAF STUDIES AND DEAF EDUCATION, 2024, 29(2): 265-277
  • [10] How is the McGurk effect modulated by Cued Speech in deaf and hearing adults?
    Bayard, Clemence
    Colin, Cecile
    Leybaert, Jacqueline
    FRONTIERS IN PSYCHOLOGY, 2014, 5