Cognitively Inspired Feature Extraction and Speech Recognition for Automated Hearing Loss Testing

Cited by: 0
Authors
Shibli Nisar
Muhammad Tariq
Ahsan Adeel
Mandar Gogate
Amir Hussain
Affiliations
[1] National University of Computer and Emerging Sciences, School of Mathematics and Computer Science
[2] Princeton University; Edinburgh Napier University
[3] University of Stirling; Taibah Valley
[4] deepCI
[5] University of Wolverhampton
[6] School of Computing
[7] Taibah University
Source
Cognitive Computation | 2019, Vol. 11
Keywords
Hearing loss; Speech recognition; Machine learning; Automation; Cognitive radio;
DOI
Not available
Abstract
Hearing loss, a partial or total inability to hear, is one of the most commonly reported disabilities. A hearing test can be carried out by an audiologist to assess a patient’s auditory system. However, the procedure requires an appointment, which can result in delays and practitioner fees. In addition, there are often challenges associated with the unavailability of equipment and qualified practitioners, particularly in remote areas. This paper presents a novel approach that automatically identifies hearing impairment based on cognitively inspired feature extraction and speech recognition. The proposed system uses an adaptive filter bank with weighted Mel-frequency cepstral coefficients for feature extraction. The adaptive filter bank implementation is inspired by the principle of spectrum sensing in cognitive radio, which is aware of its environment and adapts to statistical variations in the input stimuli by learning from the environment. Comparative performance evaluation demonstrates the potential of our automated hearing test method to achieve results comparable to the clinical ground truth established by the expert audiologist’s tests. The overall absolute error of the proposed model, compared with the expert audiologist test, is less than 4.9 dB and 4.4 dB for the pure tone and speech audiometry tests, respectively. The overall accuracy achieved is 96.67% with a hidden Markov model (HMM). The proposed method potentially offers a second opinion to audiologists, and serves as a cost-effective pre-screening test to predict hearing loss at an early stage. In future work, the authors intend to explore the application of advanced deep learning and optimization approaches to further enhance the performance of the automated testing prototype, considering imperfect datasets with real-world background noise.
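The abstract describes feature extraction based on weighted Mel-frequency cepstral coefficients (MFCCs). As a rough illustration only, the following NumPy sketch implements the standard (non-adaptive) MFCC pipeline: framing, Hamming windowing, power spectrum, triangular mel filter bank, log compression, and a DCT. All function names and parameter values here are illustrative assumptions; the paper's adaptive, cognitive-radio-inspired filter bank and weighting scheme are not reproduced.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):              # rising slope
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):             # falling slope
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr, n_filters=26, n_ceps=13,
         frame_len=400, hop=160, n_fft=512):
    # Frame the signal and apply a Hamming window
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Log mel filter-bank energies
    fbank = mel_filter_bank(n_filters, n_fft, sr)
    log_energies = np.log(power @ fbank.T + 1e-10)
    # DCT-II (as an explicit matrix) decorrelates the log energies
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                  (2 * n + 1) / (2.0 * n_filters)))
    return log_energies @ dct.T  # shape: (n_frames, n_ceps)
```

Usage on one second of a 440 Hz tone at 16 kHz produces a (98, 13) feature matrix, with 98 frames from the 25 ms window / 10 ms hop. In the paper's system, such frame-level feature vectors would then be scored by an HMM-based recognizer.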
Pages: 489-502
Page count: 13
Related Papers
50 records
  • [21] Feature extraction for HMM speech recognition systems using DTW
    Go, J
    Hyun, D
    Lee, C
    6TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL III, PROCEEDINGS: IMAGE, ACOUSTIC, SPEECH AND SIGNAL PROCESSING I, 2002, : 241 - 244
  • [22] Introducing Temporal Asymmetries in Feature Extraction for Automatic Speech Recognition
    Sivaram, G. S. V. S.
    Hermansky, Hynek
    INTERSPEECH 2008: 9TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2008, VOLS 1-5, 2008, : 890 - 893
  • [23] An auditory neural feature extraction method for robust speech recognition
    Guo, Wei
    Zhang, Liqing
    Xia, Bin
    2007 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL IV, PTS 1-3, 2007, : 793 - +
  • [24] LPC AND LPCC METHOD OF FEATURE EXTRACTION IN SPEECH RECOGNITION SYSTEM
    Gupta, Harshita
    Gupta, Divya
    2016 6th International Conference - Cloud System and Big Data Engineering (Confluence), 2016, : 498 - 502
  • [25] Feature extraction based on auditory representations for robust speech recognition
    Kim, DS
    Lee, SY
    Kil, RM
    Zhu, XL
    ELECTRONICS LETTERS, 1997, 33 (01) : 15 - 16
  • [26] A Correlational Discriminant Approach to Feature Extraction for Robust Speech Recognition
    Tomar, Vikrant Singh
    Rose, Richard C.
    13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3, 2012, : 554 - 557
  • [27] Robust Feature Extraction for Speech Recognition by Enhancing Auditory Spectrum
    Alam, Md Jahangir
    Kenny, Patrick
    O'Shaughnessy, Douglas
    13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3, 2012, : 1358 - 1361
  • [28] Effects of Simulated and Profound Unilateral Sensorineural Hearing Loss on Recognition of Speech in Competing Speech
    Asp, Filip
    Reinfeldt, Sabine
    EAR AND HEARING, 2020, 41 (02) : 411 - 419
  • [29] Speech Recognition and Lip Shape Feature Extraction for English Vowel Pronunciation of the Hearing-Impaired Based on SVM Technique
    Han, Kyung-Im
    Park, Hye-Jung
    Lee, Kun-Min
    2016 INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (BIGCOMP), 2016, : 293 - 296
  • [30] The effects of speech and speechlike maskers on unaided and aided speech recognition in persons with hearing loss
    Hornsby, Benjamin W. Y.
    Ricketts, Todd A.
    Johnson, Earl E.
    JOURNAL OF THE AMERICAN ACADEMY OF AUDIOLOGY, 2006, 17 (06) : 432 - 447