A study of speech recognition based on RNN-RBM language model

Cited by: 0
Authors
Li, Yaxiong [1 ]
Zhang, Jianqiang [2 ]
Pan, Deng [3 ]
Hu, Dan [4 ]
Affiliations
[1] Network Management Center, Hubei University of Science and Technology, Xianning, 437100, Hubei
[2] Info. Technol. Dept. of Learning Sci. & Technol., Virginia Polytechnic Institute and State Univ., Blacksburg, 24061, VA
[3] School of Foreign Languages, Hubei University of Science and Technology, Xianning, 437100, Hubei
[4] School of Foreign Languages, Zhongnan University of Economics and Law, Wuhan
Source
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | 2014, Vol. 51, No. 09
Keywords
Language model; Neural network; Recurrent neural network-restricted Boltzmann machine; Relevance information; Speech recognition
DOI
10.7544/issn1000-1239.2014.20140211
Abstract
In recent years, deep learning has emerged as a new way of training multilayer neural networks with back propagation. Its application to language modeling, for example the restricted Boltzmann machine (RBM) language model, has achieved good results. A neural-network language model maps the word history into a continuous space and estimates the probability of the next word from that representation, which alleviates the data-sparsity problem. In addition, some researchers construct language models with recurrent neural networks (RNNs) in order to exploit the full preceding context when predicting the next word, easing the long-distance dependency restriction in language. This paper attempts to capture long-distance information with an RNN-RBM model. Furthermore, dynamic adaptation of the language model is analyzed and illustrated with respect to linguistic features. The experimental results show that the RNN-RBM language model brings a considerable improvement to the performance of large-vocabulary continuous speech recognition.
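As a rough illustration of the mechanism the abstract describes (an RBM over the current word whose biases are conditioned on an RNN summary of the preceding words, following Boulanger-Lewandowski et al. [3]), here is a minimal numpy sketch. The dimensions, parameter names, and functions such as next_word_probs are illustrative assumptions, not the paper's implementation:

import numpy as np

rng = np.random.default_rng(0)
V, H, U = 20, 16, 12  # vocabulary size, RBM hidden units, RNN state size (toy values)

# RBM parameters: visible layer = one-hot word vector, hidden layer = binary features
W  = rng.normal(0.0, 0.1, (V, H))   # visible-to-hidden weights
bv = np.zeros(V)                    # static visible bias
bh = np.zeros(H)                    # static hidden bias

# Recurrent parameters that make the RBM biases depend on the word history
Wvu = rng.normal(0.0, 0.1, (V, U))  # input word -> RNN state
Wuu = rng.normal(0.0, 0.1, (U, U))  # RNN state -> RNN state
Wuv = rng.normal(0.0, 0.1, (U, V))  # RNN state -> visible bias
Wuh = rng.normal(0.0, 0.1, (U, H))  # RNN state -> hidden bias
bu  = np.zeros(U)

def softplus(x):
    # log(1 + exp(x)), computed stably
    return np.logaddexp(0.0, x)

def next_word_probs(u_prev):
    # History-dependent biases of the conditional RBM at this time step
    bv_t = bv + u_prev @ Wuv
    bh_t = bh + u_prev @ Wuh
    # For a one-hot visible e_w the RBM free energy is
    #   F(e_w) = -bv_t[w] - sum_j softplus(bh_t[j] + W[w, j]),
    # so a softmax over -F gives a normalized next-word distribution.
    scores = bv_t + softplus(bh_t + W).sum(axis=1)
    scores -= scores.max()          # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def advance(word_id, u_prev):
    # Fold the observed word into the RNN state u_t
    v = np.zeros(V)
    v[word_id] = 1.0
    return np.tanh(v @ Wvu + u_prev @ Wuu + bu)

# Score a toy word-id sequence the way a language model would
u, logprob = np.zeros(U), 0.0
for w in [3, 7, 1, 7]:
    logprob += np.log(next_word_probs(u)[w])
    u = advance(w, u)
print(f"log P(sequence) = {logprob:.3f}")

Because the visible vector is one-hot, the free energy of every candidate word reduces to a single row of W, which is what makes the exact softmax in next_word_probs feasible in this sketch; a real large-vocabulary system would need class-based or sampled approximations.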
Pages: 1936-1944
Number of pages: 8
References (9)
[1]  
Bengio Y., Ducharme R., Vincent P., et al., A neural probabilistic language model, Journal of Machine Learning Research, 3, 2, pp. 1137-1155, (2003)
[2]  
Mikolov T., Kopecky J., Burget L., et al., Neural network based language models for highly inflective languages, pp. 4725-4728, (2009)
[3]  
Boulanger-Lewandowski N., Bengio Y., Vincent P., Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription, pp. 590-598, (2012)
[4]  
Bottou L., Stochastic gradient learning in neural networks, pp. 687-699, (1991)
[5]  
Hinton G.E., Osindero S., Teh Y.W., A fast learning algorithm for deep belief nets, Neural Computation, 18, 7, pp. 1527-1554, (2006)
[6]  
Bengio Y., Frasconi P., Simard P., The problem of learning long-term dependencies in recurrent networks, pp. 1183-1188, (1993)
[7]  
Xu W., Rudnicky A., Can artificial neural networks learn language models?, (2000)
[8]  
Bengio Y., Simard P., Frasconi P., Learning long-term dependencies with gradient descent is difficult, IEEE Transactions on Neural Networks, 5, 2, pp. 157-166, (1994)
[9]  
Mikolov T., Kombrink S., Deoras A., et al., RNNLM - Recurrent neural network language modeling toolkit, pp. 5528-5531, (2011)