Automatic text-independent speaker verification using convolutional deep belief network

Cited by: 2
Authors
Rakhmanenko, I. A. [1 ]
Shelupanov, A. A. [1 ]
Kostyuchenko, E. Y. [1 ]
Affiliations
[1] Tomsk State Univ Control Syst & Radioelect, Prospect Lenina 40, Tomsk 634050, Russia
Keywords
speaker recognition; speaker verification; Gaussian mixture models; GMM-UBM system; speech features; speech processing; deep learning; neural networks; pattern recognition
DOI
10.18287/2412-6179-CO-621
Chinese Library Classification
O43 [Optics];
Subject classification codes
070207 ; 0803 ;
Abstract
This paper is devoted to the use of a convolutional deep belief network as a speech feature extractor for automatic text-independent speaker verification. The paper describes the application scope and problems of automatic speaker verification systems. Types of modern speaker verification systems and types of speech features used in such systems are considered. The structure and learning algorithm of convolutional deep belief networks are described. The use of speech features extracted from three layers of a trained convolutional deep belief network is proposed. Experimental studies of the proposed features were performed on two speech corpora: the authors' own corpus, comprising audio recordings of 50 speakers, and the TIMIT corpus, comprising audio recordings of 630 speakers. The accuracy of the proposed features was assessed using several types of classifiers. Direct use of these features did not improve accuracy over traditional spectral speech features such as mel-frequency cepstral coefficients. However, using these features in a classifier ensemble reduced the equal error rate to 0.21% on the 50-speaker corpus and to 0.23% on the TIMIT corpus.
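The headline results are reported as equal error rates (EER), the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). As a minimal illustrative sketch (not the authors' evaluation code), the EER can be estimated from genuine and impostor trial scores by sweeping a decision threshold:

```python
def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER by sweeping a decision threshold over all
    observed scores and returning the point where FAR and FRR are
    closest (averaging the two rates at that threshold)."""
    thresholds = sorted(set(genuine_scores) | set(impostor_scores))
    best_far, best_frr = 1.0, 0.0
    for t in thresholds:
        # impostor trials scoring at or above the threshold are false accepts
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        # genuine trials scoring below the threshold are false rejects
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0
```

With perfectly separated score distributions this returns 0.0; heavily overlapping distributions push it toward 0.5. Production toolkits typically interpolate the ROC curve rather than sweep raw scores, but the idea is the same.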
Pages: 596 / +
Page count: 12