Parallel Training of Neural Networks for Speech Recognition

Cited by: 0
Authors
Vesely, Karel [1 ]
Burget, Lukas [1 ]
Grezl, Frantisek [1 ]
Affiliation
[1] Brno Univ Technol, Speech FIT, Brno 61266, Czech Republic
Source
TEXT, SPEECH AND DIALOGUE | 2010 / Vol. 6231
Keywords
neural network; phoneme classification; posterior features; backpropagation training; data parallelization;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Feed-forward multi-layer neural networks are of significant importance in speech recognition. A new parallel-training tool, TNet, was designed and optimized for multiprocessor computers. Training acceleration rates are reported on a phoneme-state classification task.
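The "data parallelization" named in the keywords generally means splitting each minibatch across workers, computing gradients per shard, and combining them with a size-weighted average before the weight update. The following is a minimal illustrative sketch of that idea for a toy one-parameter linear model; it is an assumption-based illustration, not TNet's actual implementation.

```python
# Sketch of data-parallel gradient computation (illustrative, NOT TNet's code):
# each worker computes the gradient on its shard of the minibatch, and the
# shard gradients are combined with a size-weighted average, which exactly
# reproduces the full-batch gradient.
from concurrent.futures import ThreadPoolExecutor

def shard_gradient(w, shard):
    # Gradient of mean squared error for the toy model y ~ w * x
    # on one shard of (x, y) pairs.
    n = len(shard)
    return sum(2.0 * (w * x - y) * x for x, y in shard) / n

def parallel_gradient(w, data, n_workers=4):
    # Split the minibatch into roughly equal shards (data parallelism).
    shards = [data[i::n_workers] for i in range(n_workers)]
    shards = [s for s in shards if s]  # drop empty shards for tiny batches
    with ThreadPoolExecutor(len(shards)) as pool:
        grads = list(pool.map(lambda s: shard_gradient(w, s), shards))
    # Size-weighted average of the shard gradients.
    total = sum(len(s) for s in shards)
    return sum(g * len(s) for g, s in zip(grads, shards)) / total
```

In a real trainer such as the one the paper describes, the model would be a multi-layer network trained with backpropagation, but the split/compute/combine pattern is the same.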
Pages: 439-446
Page count: 8
Related papers
6 records
  • [1] [Anonymous], 2002, CISLICOVA FILTRACE A
  • [2] [Anonymous], 2006, Pattern recognition and machine learning
  • [3] Karafiat M., 2007, LNCS, P275
  • [4] Kontar S., 2006, THESIS FIT VUT BRNO
  • [5] Pethick M., 2003, P INT C PAR DISTR CO, V392, P165
  • [6] Szoke I., 2005, P INT 2005 EUR