Speaker Identification Under Noisy Conditions Using Hybrid Deep Learning Model

Cited by: 0
Authors
Lambamo, Wondimu [1 ]
Srinivasagan, Ramasamy [1 ,2 ]
Jifara, Worku [1 ]
Affiliations
[1] Adama Sci & Technol Univ, Adama 1888, Ethiopia
[2] King Faisal Univ, Al Hasa 31982, Saudi Arabia
Source
PAN-AFRICAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, PT I, PANAFRICON AI 2023 | 2024 / Vol. 2068
Keywords
Speaker Identification; Convolutional Neural Network; Cochleogram; Bidirectional Gated Recurrent Unit; Real-World Noises; FEATURES; MFCC; VERIFICATION
DOI
10.1007/978-3-031-57624-9_9
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Code
081104; 0812; 0835; 1405
Abstract
Speaker identification is a biometric mechanism that identifies the person speaking from a set of known speakers. It has vital applications in security, surveillance, forensic investigation, and other areas. Speaker identification systems achieve good accuracy on clean speech, but their performance degrades under noisy and mismatched conditions. Recently, hybrid networks combining convolutional neural networks (CNNs) with enhanced recurrent neural network (RNN) variants have performed well in speech recognition, image classification, and other pattern recognition tasks. Moreover, cochleogram features have shown better accuracy than conventional features for speech and speaker recognition under noisy conditions. However, no prior work has combined hybrid CNN and enhanced RNN variants with cochleogram input to improve speaker recognition accuracy in noisy environments. This study proposes a speaker identification model for noisy conditions that applies a hybrid CNN and bidirectional gated recurrent unit (BiGRU) network to cochleogram input. The model was evaluated on the VoxCeleb1 speech dataset with real-world noise, with white Gaussian noise (WGN), and without additive noise. Real-world noise and WGN were added to the dataset at signal-to-noise ratios (SNRs) from -5 dB to 20 dB in 5 dB steps. The proposed model attained accuracies of 93.15%, 97.55%, and 98.60% on the dataset with real-world noise at SNRs of -5 dB, 10 dB, and 20 dB, respectively, and performed comparably on real-world noise and WGN at matching SNR levels. On the dataset without additive noise, the model achieved 98.85% accuracy. The evaluation results and comparison with previous work indicate that the proposed model achieves better accuracy.
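To make the described architecture concrete, the sketch below builds a CNN front end over a cochleogram treated as a single-channel time-frequency "image", followed by a BiGRU layer and a softmax over the closed speaker set. This is a minimal illustration assuming a Keras implementation; the layer counts, filter sizes, and input dimensions are hypothetical, since the record does not give the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers


def build_cnn_bigru(num_speakers: int,
                    time_steps: int = 300,
                    freq_bins: int = 64) -> tf.keras.Model:
    """Minimal CNN + BiGRU sketch for cochleogram input.

    All layer sizes are illustrative, not the paper's reported settings.
    """
    # Cochleogram as a single-channel time-frequency "image".
    inp = layers.Input(shape=(time_steps, freq_bins, 1))

    # CNN front end: local time-frequency feature extraction.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)

    # Flatten frequency and channel axes so each time step is one vector.
    x = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)

    # BiGRU models temporal context in both directions.
    x = layers.Bidirectional(layers.GRU(128))(x)

    # Softmax over the closed set of known speakers.
    out = layers.Dense(num_speakers, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```

A model of this shape would be trained with categorical cross-entropy over speaker labels; the noise robustness reported in the abstract comes from the cochleogram front end and from training and evaluating on noise-corrupted utterances at the stated SNR levels.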
Pages: 154-175
Page count: 22