A Near Real-Time Automatic Speaker Recognition Architecture for Voice-Based User Interface

Cited by: 36
Authors:
Dhakal, Parashar [1]
Damacharla, Praveen [2]
Javaid, Ahmad Y. [1]
Devabhaktuni, Vijay [2]
Affiliations:
[1] Univ Toledo, Elect Engn & Comp Sci Dept, Toledo, OH 43606 USA
[2] Purdue Univ Northwest, ECE Dept, Hammond, IN 46323 USA
Keywords:
classifiers; convolution neural network; architecture; feature extraction; machine learning; random forest; speaker recognition; voice interface
DOI:
10.3390/make1010031
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory]
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
In this paper, we present a novel pipelined, near real-time speaker recognition architecture that improves recognition performance by exploiting a hybrid feature extraction technique combining Gabor Filter (GF) features, Convolutional Neural Network (CNN) features, and statistical parameters into a single matrix set. The architecture was developed to enable secure, speaker-authenticated access to a voice-based user interface (UI) and to integrate with an existing Natural Language Processing (NLP) system. We first identify the challenges of real-time speaker recognition and survey recent research in the field. We then analyze the functional requirements of a speaker recognition system and introduce mechanisms that address these requirements through the proposed architecture. Next, we discuss the effect of the different feature extraction techniques (CNN, GF, and statistical parameters). For classification, standard classifiers such as Support Vector Machine (SVM), Random Forest (RF), and Deep Neural Network (DNN) are investigated. To verify the validity and effectiveness of the proposed architecture, we compare accuracy, sensitivity, and specificity against the standard AlexNet architecture.
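As a rough illustration of the hybrid feature idea described in the abstract (Gabor-filter responses combined with statistical parameters into a single feature set), a minimal NumPy sketch might look like the following. The kernel parameters, chosen frequencies, and statistics below are illustrative assumptions, not the paper's actual configuration, and the learned CNN features are omitted.

```python
import numpy as np

def gabor_kernel_1d(freq, sigma, length=65):
    # 1-D Gabor kernel: Gaussian envelope times a cosine carrier.
    # freq is in cycles per sample; sigma controls the envelope width.
    t = np.arange(length) - length // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * t)

def hybrid_features(signal, freqs=(0.05, 0.1, 0.2), sigma=8.0):
    # Concatenate Gabor-filter band energies with simple statistical
    # parameters into one feature vector (illustrative choices only).
    feats = []
    for f in freqs:
        response = np.convolve(signal, gabor_kernel_1d(f, sigma), mode='same')
        feats.append(np.mean(response**2))           # per-band energy
    feats.extend([signal.mean(), signal.std(),       # statistical parameters
                  np.abs(signal).max()])
    return np.asarray(feats)

rng = np.random.default_rng(0)
frame = rng.standard_normal(400)   # stand-in for one audio frame
features = hybrid_features(frame)  # 3 band energies + 3 statistics
print(features.shape)              # (6,)
```

Feature vectors of this kind would then be fed to a standard classifier such as an SVM or Random Forest, as the abstract describes.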
Pages: 504-520 (17 pages)