Self-conducted speech audiometry using automatic speech recognition: Simulation results for listeners with hearing loss

Cited by: 8
Authors
Ooster, Jasper [1 ,3 ]
Tuschen, Laura [2 ]
Meyer, Bernd T. [1 ,3 ]
Affiliations
[1] Carl von Ossietzky Univ Oldenburg, Commun Acoust, D-26129 Oldenburg, Germany
[2] Fraunhofer Inst Digital Media Technol IDMT, Oldenburg Branch Hearing Speech & Audio Technol HS, D-26129 Oldenburg, Germany
[3] Cluster Excellence Hearing4all, Oldenburg, Germany
Keywords
Speech audiometry; Automatic speech recognition; Matrix sentence test; Unsupervised measurement; NOISE; INTELLIGIBILITY; VALIDATION; SENTENCES; TESTS;
DOI
10.1016/j.csl.2022.101447
CLC classification
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Speech-in-noise tests are an important tool for assessing hearing impairment and the success of hearing-aid fittings, as well as for research in psychoacoustics. An important drawback of many speech-based tests is that an expert must be present during the measurement to assess the listener's performance. This drawback may be largely overcome through the use of automatic speech recognition (ASR), which enables automatic response logging. However, such an unsupervised system may reduce measurement accuracy because the recognizer can introduce errors. In this study, two ASR systems are compared for automated testing: a system with a feed-forward deep neural network (DNN) from a previous study (Ooster et al., 2018), and a state-of-the-art system based on a time-delay neural network (TDNN). The dynamic measurement procedure of the speech intelligibility test was simulated by taking the subjects' hearing loss into account and selecting from real recordings of test participants. The ASR systems' performance is investigated on responses of 73 listeners, ranging from normal-hearing (NH) to severely hearing-impaired (HI), as well as on read speech from cochlear implant (CI) listeners. The feed-forward DNN produced accurate testing results for NH and unaided HI listeners, but decreased measurement accuracy in the simulation of the adaptive measurement procedure for aided severely HI listeners recorded in noisy environments with a loudspeaker setup. The TDNN system produces error rates of 0.6% and 3.0% for deletion and insertion errors, respectively. We estimate that the deviation of the speech recognition threshold (SRT) with this system is below 1.38 dB for 95% of the users. This result indicates that a robust unsupervised conduction of the matrix sentence test is possible with an accuracy similar to that of a human supervisor, even in noisy conditions and with altered or disordered speech from elderly severely HI listeners and listeners with a CI.
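The abstract refers to an adaptive matrix-test procedure in which the signal-to-noise ratio (SNR) is adjusted after each five-word sentence until the track converges on the SRT, the SNR yielding 50% word recognition. A minimal illustrative sketch of such a tracking loop is given below; the listener model (logistic psychometric function), its parameters, and the simple proportional step rule are hypothetical simplifications, not the exact procedure used in the paper or in the published matrix-test specification.

```python
import math
import random

def listener_response(snr_db, srt_db=-7.0, slope=0.15, n_words=5):
    """Simulated listener: each of the 5 matrix words is recognized
    independently with a probability given by a logistic psychometric
    function (hypothetical parameters; real listeners differ)."""
    p = 1.0 / (1.0 + math.exp(-4.0 * slope * (snr_db - srt_db)))
    return sum(random.random() < p for _ in range(n_words))

def adaptive_srt(n_sentences=30, start_snr=0.0, target=0.5, step=2.0):
    """Toy adaptive track converging on the SNR that yields `target`
    word recognition; the actual matrix-test rule differs in detail."""
    snr = start_snr
    levels = []
    for _ in range(n_sentences):
        correct = listener_response(snr)
        levels.append(snr)
        # lower the SNR after good trials, raise it after poor ones
        snr -= step * (correct / 5.0 - target)
    # crude SRT estimate: mean presentation level of the last 10 trials
    return sum(levels[-10:]) / 10.0
```

In an ASR-based version of this loop, `listener_response` would be replaced by the recognizer's transcription of the listener's spoken answer; the deletion and insertion error rates quoted in the abstract then translate directly into word-count errors that bias the tracked SNR.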
Pages: 14