Self-conducted speech audiometry using automatic speech recognition: Simulation results for listeners with hearing loss

Cited by: 7
Authors
Ooster, Jasper [1 ,3 ]
Tuschen, Laura [2 ]
Meyer, Bernd T. [1 ,3 ]
Affiliations
[1] Carl von Ossietzky Univ Oldenburg, Commun Acoust, D-26129 Oldenburg, Germany
[2] Fraunhofer Inst Digital Media Technol IDMT, Oldenburg Branch Hearing Speech & Audio Technol HS, D-26129 Oldenburg, Germany
[3] Cluster Excellence Hearing4all, Oldenburg, Germany
Keywords
Speech audiometry; Automatic speech recognition; Matrix sentence test; Unsupervised measurement; Noise; Intelligibility; Validation; Sentences; Tests
DOI
10.1016/j.csl.2022.101447
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Numbers
081104; 0812; 0835; 1405
Abstract
Speech-in-noise tests are an important tool for assessing hearing impairment and the success of hearing-aid fittings, as well as for research in psychoacoustics. An important drawback of many speech-based tests is that an expert must be present during the measurement to assess the listener's performance. This drawback can largely be overcome with automatic speech recognition (ASR), which enables automatic response logging. However, such an unsupervised system may reduce measurement accuracy because it can introduce recognition errors. In this study, two ASR systems are compared for automated testing: a system with a feed-forward deep neural network (DNN) from a previous study (Ooster et al., 2018) and a state-of-the-art system using a time-delay neural network (TDNN). The dynamic measurement procedure of the speech intelligibility test was simulated, taking the subjects' hearing loss into account and selecting from real recordings of test participants. The ASR systems' performance is investigated based on responses of 73 listeners, ranging from normal-hearing (NH) to severely hearing-impaired (HI), as well as read speech from cochlear implant (CI) listeners. The feed-forward DNN produced accurate test results for NH and unaided HI listeners, but measurement accuracy decreased in the simulation of the adaptive measurement procedure for aided, severely HI listeners recorded in noisy environments with a loudspeaker setup. The TDNN system produces error rates of 0.6% and 3.0% for deletion and insertion errors, respectively. We estimate that the speech reception threshold (SRT) deviation with this system is below 1.38 dB for 95% of users. This result indicates that a robust unsupervised administration of the matrix sentence test is possible with accuracy similar to that of a human supervisor, even for noisy conditions and altered or disordered speech from elderly, severely HI listeners and listeners with a CI.
Pages: 14
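To illustrate the kind of measurement the abstract describes, the following is a minimal toy sketch (Python) of an adaptive speech-in-noise track scored by an imperfect ASR back end: a logistic psychometric function stands in for the listener, five-word matrix sentences are scored word by word, the per-word deletion (0.6%) and insertion (3.0%) rates quoted in the abstract corrupt the scoring, and the presentation SNR is driven toward 50% intelligibility. The psychometric function, the step rule, and all constants are illustrative assumptions and are not the adaptive procedure or error model used by Ooster et al.

import math
import random

# Toy simulation of an ASR-scored adaptive speech-in-noise track.
# All parameters below are illustrative assumptions, not values from the paper,
# except the deletion/insertion rates taken from the abstract.

WORDS_PER_SENTENCE = 5      # matrix-test sentences contain five words
TRUE_SRT_DB = -7.0          # assumed "true" 50%-intelligibility SNR of the listener
SLOPE_PER_DB = 0.15         # assumed psychometric slope at the SRT (probability per dB)
DELETION_RATE = 0.006       # ASR drops credit for a correctly repeated word (0.6%)
INSERTION_RATE = 0.030      # ASR credits a word the listener actually got wrong (3.0%)

def p_word_correct(snr_db: float) -> float:
    """Logistic psychometric function for a single word (assumption)."""
    return 1.0 / (1.0 + math.exp(-4.0 * SLOPE_PER_DB * (snr_db - TRUE_SRT_DB)))

def score_sentence(snr_db: float, rng: random.Random) -> int:
    """Simulate the listener's response and the ASR's imperfect word scoring."""
    credited = 0
    for _ in range(WORDS_PER_SENTENCE):
        listener_correct = rng.random() < p_word_correct(snr_db)
        if listener_correct:
            # a deletion error removes credit for a word that was actually correct
            credited += 0 if rng.random() < DELETION_RATE else 1
        else:
            # an insertion error grants credit for a word that was actually wrong
            credited += 1 if rng.random() < INSERTION_RATE else 0
    return credited

def run_track(n_sentences: int = 20, start_snr_db: float = 0.0,
              step_db: float = 2.0, seed: int = 1) -> float:
    """Word-proportion-driven SNR track; returns a crude SRT estimate."""
    rng = random.Random(seed)
    snr = start_snr_db
    history = []
    for _ in range(n_sentences):
        frac_correct = score_sentence(snr, rng) / WORDS_PER_SENTENCE
        snr -= 2.0 * step_db * (frac_correct - 0.5)   # steer toward 50% correct
        history.append(snr)
    second_half = history[len(history) // 2:]
    return sum(second_half) / len(second_half)

if __name__ == "__main__":
    estimate = run_track()
    print(f"true SRT {TRUE_SRT_DB:+.1f} dB SNR, estimated {estimate:+.1f} dB SNR")

Such a sketch only shows the mechanics of ASR-scored adaptive tracking; the accuracy figures reported in the abstract come from simulations driven by real recordings of listener responses.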