Speaker Identification Under Noisy Conditions Using Hybrid Deep Learning Model

Cited by: 0
Authors
Lambamo, Wondimu [1 ]
Srinivasagan, Ramasamy [1 ,2 ]
Jifara, Worku [1 ]
Affiliations
[1] Adama Sci & Technol Univ, Adama 1888, Ethiopia
[2] King Faisal Univ, Al Hasa 31982, Saudi Arabia
Source
PAN-AFRICAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, PT I, PANAFRICON AI 2023 | 2024, Vol. 2068
Keywords
Speaker Identification; Convolutional Neural Network; Cochleogram; Bidirectional Gated Recurrent Unit; Real-World Noises; FEATURES; MFCC; VERIFICATION;
DOI
10.1007/978-3-031-57624-9_9
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Speaker identification is a biometric mechanism that determines which person, from a set of known speakers, is speaking. It has vital applications in areas such as security, surveillance, and forensic investigation. Speaker identification systems achieve good accuracy on clean speech, but their performance degrades under noisy and mismatched conditions. Recently, hybrid networks combining convolutional neural networks (CNNs) with enhanced recurrent neural network (RNN) variants have performed well in speech recognition, image classification, and other pattern recognition tasks. Moreover, cochleogram features have shown better accuracy in speech and speaker recognition under noisy conditions. However, hybrid CNN and enhanced RNN variants have not yet been applied to cochleogram input for speaker recognition to improve accuracy in noisy environments. This study proposes a speaker identification model for noisy conditions that uses a hybrid CNN and bidirectional gated recurrent unit (BiGRU) network on cochleogram input. The model was evaluated on the VoxCeleb1 speech dataset with real-world noise, with white Gaussian noise (WGN), and without additive noise. Real-world noise and WGN were added to the dataset at signal-to-noise ratios (SNRs) from -5 dB to 20 dB in 5 dB steps. The proposed model attained accuracies of 93.15%, 97.55%, and 98.60% on the dataset with real-world noise at SNRs of -5 dB, 10 dB, and 20 dB, respectively, and showed approximately the same performance on WGN at matching SNR levels. On the dataset without additive noise, the model achieved 98.85% accuracy. The evaluation results and the comparison with previous work indicate that our model achieves better accuracy.
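
As an illustration of the approach described in the abstract, the sketch below shows a hybrid CNN + BiGRU classifier over cochleogram input, together with a helper that mixes noise into clean speech at a target SNR as in the -5 dB to 20 dB evaluation conditions. This is a minimal sketch, not the authors' exact architecture: the layer sizes, the assumed cochleogram shape (64 gammatone channels by 300 frames), the speaker count, and the function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a hybrid CNN + BiGRU speaker-identification model over
# cochleogram input (TensorFlow/Keras). Layer sizes, input shape, and speaker
# count are assumptions for illustration, not values from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SPEAKERS = 1251                 # VoxCeleb1 identification set has 1,251 speakers
COCHLEOGRAM_SHAPE = (64, 300, 1)    # (gammatone channels, time frames, 1) -- assumed

def build_cnn_bigru(input_shape=COCHLEOGRAM_SHAPE, num_speakers=NUM_SPEAKERS):
    inputs = layers.Input(shape=input_shape)
    # CNN front end: learns local time-frequency patterns from the cochleogram.
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Reshape the feature maps into a sequence over time:
    # (freq, time, channels) -> (time, freq * channels).
    f, t, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Permute((2, 1, 3))(x)
    x = layers.Reshape((t, f * c))(x)
    # BiGRU back end: models temporal context in both directions.
    x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(128))(x)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_speakers, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def add_noise_at_snr(clean, noise, snr_db):
    """Mix a noise signal into clean speech at a target SNR in dB
    (e.g. -5, 0, 5, 10, 15, 20), as in the evaluation conditions."""
    noise = np.resize(noise, clean.shape)        # tile/trim noise to the clean length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise
```

For training, each utterance would be converted to a cochleogram (e.g. via a gammatone filterbank), optionally mixed with real-world noise or WGN using a helper like add_noise_at_snr, and fed to the model returned by build_cnn_bigru with the speaker index as the label.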
Pages: 154 - 175
Number of pages: 22