AN ACCELERATED LEARNING ALGORITHM FOR MULTILAYER PERCEPTRONS - OPTIMIZATION LAYER-BY-LAYER

Cited: 87
Authors
ERGEZINGER, S [1 ]
THOMSEN, E [1 ]
Affiliation
[1] UNIV HANNOVER,INST ALLGEMEINE NACHRICHTENTECHN,HANNOVER,GERMANY
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1995 / Vol. 6 / No. 1
Keywords
DOI
10.1109/72.363452
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations thereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network in order to minimize a given cost function. However, backpropagation, as a steepest-descent approach, is too slow for many applications. In this paper a new learning procedure is presented which is based on a linearization of the nonlinear processing elements and the optimization of the multilayer perceptron layer by layer. In order to limit the introduced linearization error, a penalty term is added to the cost function. The new learning algorithm is applied to the problem of nonlinear prediction of chaotic time series. The proposed algorithm yields accuracy and convergence rates that are orders of magnitude superior to those of conventional backpropagation learning.
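The abstract describes the idea only at a high level, so the sketch below illustrates what one layer-wise update of this kind can look like. It is a minimal illustration, not the authors' OLL algorithm: the sigmoid layer, the column-by-column ridge-regularized least-squares solve, the penalty weight lam, the function names, and the toy data are all assumptions made for the example. Each nonlinear unit is linearized around its current operating point, and the resulting linear least-squares problem for the weight change is solved with a penalty that keeps the step small enough for the linearization to remain valid.

```python
# Hedged sketch of a linearized, penalized layer-wise update (illustrative only;
# not the exact OLL procedure of Ergezinger and Thomsen).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layerwise_update(X, T, W, lam=0.1):
    """One linearized least-squares update for a sigmoid layer.

    X   : (N, d) layer inputs
    T   : (N, m) desired layer outputs
    W   : (d, m) current weights
    lam : penalty weight that limits the step so the linearization stays valid
    """
    Z = X @ W                       # pre-activations at the current operating point
    A = sigmoid(Z)                  # current layer outputs
    S = A * (1.0 - A)               # sigmoid derivative at the operating point
    dW = np.zeros_like(W)
    d = X.shape[1]
    for j in range(W.shape[1]):     # each output unit gives an independent problem
        Aj = S[:, [j]] * X          # linearized design matrix: diag(sigma'(z_j)) X
        rj = T[:, j] - A[:, j]      # residual of the current outputs
        # ridge-regularized normal equations: (Aj^T Aj + lam I) dw = Aj^T rj
        dW[:, j] = np.linalg.solve(Aj.T @ Aj + lam * np.eye(d), Aj.T @ rj)
    return W + dW

# Toy usage: fit a single sigmoid layer to targets generated by a hidden weight matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
T = sigmoid(X @ rng.normal(size=(5, 3)))
W = rng.normal(scale=0.1, size=(5, 3))
for _ in range(20):
    W = layerwise_update(X, T, W, lam=0.1)
print("final MSE:", np.mean((sigmoid(X @ W) - T) ** 2))
```

The penalty plays the role described in the abstract: without it, the least-squares step can become so large that the linearization of the sigmoid around the current operating point no longer holds.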
Pages: 31-42
Page count: 12