Fast Learning Algorithms for Feedforward Neural Networks

Cited by: 0
Authors
Minghu Jiang
Georges Gielen
Bo Zhang
Zhensheng Luo
Affiliations
[1] Catholic University of Leuven, Department of Electrical Engineering, MICAS
[2] Tsinghua University, Lab of Computational Linguistics, Department of Chinese Language
[3] Tsinghua University, State Key Lab of Intelligent Technology & Systems, Department of Computer Science and Technology
Source
Applied Intelligence | 2003 / Volume 18
Keywords
fast algorithm; error function; conjugate gradient; global convergence; feedforward neural networks
Abstract
To improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms, obtained (1) by changing the error function, using the exponential attenuation (or bell impulse) function and the Fourier kernel function as alternatives, and (2) by introducing a hybrid conjugate-gradient algorithm with global optimization and a dynamic learning rate, which overcomes the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions: training is faster than with existing fast methods. On real speech data, our hybrid algorithm also achieves a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and requires less training time, is less complex, and is more robust than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.
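The abstract names the two ideas (swapping in an alternative error function, and a hybrid conjugate-gradient scheme) without giving their exact formulas, so the following is a minimal, illustrative sketch rather than the authors' method: a one-hidden-layer sigmoid network trained by plain gradient-descent backpropagation, where the error function is passed in as a (loss, gradient) pair so alternatives can be swapped in. The bell_error function here is a hypothetical Gaussian-style stand-in for a bell-impulse loss; the paper's actual exponential-attenuation and Fourier-kernel error functions, and its conjugate-gradient hybrid, are not reproduced.

import numpy as np

# Minimal sketch, not the paper's implementation: a one-hidden-layer sigmoid MLP
# trained by plain gradient-descent backpropagation, with the error function
# supplied as a (loss, loss_grad) pair so alternative error functions can be tried.

def squared_error(y, t):
    return 0.5 * np.sum((y - t) ** 2)

def squared_error_grad(y, t):
    return y - t

def bell_error(y, t, beta=1.0):
    # Hypothetical bell-shaped (Gaussian-like) loss, used only to illustrate
    # an "alternative error function"; it saturates for large residuals.
    return np.sum(1.0 - np.exp(-beta * (y - t) ** 2))

def bell_error_grad(y, t, beta=1.0):
    return 2.0 * beta * (y - t) * np.exp(-beta * (y - t) ** 2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, T, hidden=8, lr=0.5, epochs=5000,
          loss=squared_error, loss_grad=squared_error_grad, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, T.shape[1])); b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)              # hidden activations
        Y = sigmoid(H @ W2 + b2)              # network outputs
        dY = loss_grad(Y, T) * Y * (1 - Y)    # backprop through output sigmoid
        dH = (dY @ W2.T) * H * (1 - H)        # backprop through hidden layer
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
    Y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return Y, loss(Y, T)

# Toy usage: XOR, trained once with the squared error and once with the bell loss.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
_, e1 = train(X, T)
_, e2 = train(X, T, loss=bell_error, loss_grad=bell_error_grad)
print(f"final squared-error loss: {e1:.4f}   final bell loss: {e2:.4f}")

On the conjugate-gradient side, the standard search directions d_k = -g_k + beta_k d_(k-1) differ only in the scalar beta_k: Fletcher-Reeves uses beta_k = ||g_k||^2 / ||g_(k-1)||^2, while Polak-Ribière uses beta_k = g_k^T (g_k - g_(k-1)) / ||g_(k-1)||^2. How the paper hybridizes these with a dynamic learning rate is not recoverable from the abstract alone and is therefore not sketched here.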
Pages: 37-54
Number of pages: 17