Deep Activation Mixture Model for Speech Recognition

Cited by: 1
Authors
Wu, Chunyang [1 ]
Gales, Mark J. F. [1 ]
Affiliations
[1] Univ Cambridge, Dept Engn, Trumpington St, Cambridge CB2 1PZ, England
Source
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION | 2017
Keywords
deep learning; mixture model; speaker adaptation; NEURAL-NETWORK; ADAPTATION;
DOI
10.21437/Interspeech.2017-1233
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Deep learning approaches achieve state-of-the-art performance in a range of applications, including speech recognition. However, the parameters of the deep neural network (DNN) are hard to interpret, which makes regularisation and adaptation to speaker or acoustic conditions challenging. This paper proposes the deep activation mixture model (DAMM) to address these problems. The output of one hidden layer is modelled as the sum of a mixture model and a residual model. The mixture model forms an activation function contour, while the residual one models fluctuations around the contour. The use of the mixture model gives two advantages: first, it introduces a novel regularisation on the DNN; second, it allows novel adaptation schemes. The proposed approach is evaluated on a large-vocabulary U.S. English broadcast news task. It yields a slightly better performance than the DNN baselines, and on utterance-level unsupervised adaptation, the adapted DAMM achieves further performance gains.
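To make the layer decomposition in the abstract concrete, the following is a minimal, hypothetical sketch of a DAMM-style hidden layer: the activations are the sum of a smooth contour produced by a small mixture model over hidden-unit positions and a conventional residual term. The function name, the Gaussian form of the mixture components, and the sigmoid residual are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def damm_layer(x, W_res, b_res, mix_weights, mix_means, mix_vars):
    """Hypothetical sketch of a deep activation mixture model (DAMM) layer.

    The layer output is the sum of a mixture-model contour over
    hidden-unit positions and a residual term modelling fluctuations
    around that contour.
    """
    n_hidden = W_res.shape[0]
    # Place hidden units on a normalised 1-D grid of positions.
    pos = np.linspace(0.0, 1.0, n_hidden)

    # Mixture model: a smooth activation contour built from a few
    # Gaussian components (assumed form for illustration).
    contour = np.zeros(n_hidden)
    for w, mu, var in zip(mix_weights, mix_means, mix_vars):
        contour += w * np.exp(-0.5 * (pos - mu) ** 2 / var)

    # Residual model: a standard affine transform with a sigmoid
    # activation, capturing fluctuations around the contour.
    residual = 1.0 / (1.0 + np.exp(-(W_res @ x + b_res)))

    return contour + residual
```

Because the contour is controlled by only a handful of mixture parameters, adapting just those parameters per speaker or utterance gives a compact adaptation scheme, which is the motivation described in the abstract.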
Pages: 1611-1615 (5 pages)
References
29 records in total
[1]  
Abdel-Hamid O, 2013, INTERSPEECH, P1247
[2]  
[Anonymous], 2014, Advances in Neural Information Processing Systems
[3]  
Bell P, 2015, INT CONF ACOUST SPEE, P4290, DOI 10.1109/ICASSP.2015.7178780
[4]  
Bishop, 1994, MIXTURE DENSITY NETW, DOI 10.1007/978-3-322-81570-58
[5]  
Chen X, 2012, 13TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2012 (INTERSPEECH 2012), VOLS 1-3, P26
[6]  
Collobert R., 2008, P 25 ICML, P160, DOI 10.1145/1390156.1390177
[7]   Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition [J].
Dahl, George E. ;
Yu, Dong ;
Deng, Li ;
Acero, Alex .
IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2012, 20 (01) :30-42
[8]  
Dai L., 2016, J MACH LEARN RES, V17, P1
[9]  
Delcroix M, 2015, INT CONF ACOUST SPEE, P4535, DOI 10.1109/ICASSP.2015.7178829
[10]   Linear hidden transformations for adaptation of hybrid ANN/HMM models [J].
Gemello, Roberto ;
Mana, Franco ;
Scanzio, Stefano ;
Laface, Pietro ;
De Mori, Renato .
SPEECH COMMUNICATION, 2007, 49 (10-11) :827-835