Asymptotic Convergence of Backpropagation

Cited by: 20
Authors
Tesauro, Gerald [1 ]
He, Yu [2 ]
Ahmad, Subutai [2 ]
Affiliations
[1] IBM Thomas J Watson Res Ctr, POB 704, Yorktown Hts, NY 10598 USA
[2] Univ Illinois Urbana Champaign, Ctr Complex Syst Res, Beckman Inst, Urbana, IL 61801 USA
DOI
10.1162/neco.1989.1.3.382
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We calculate analytically the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. For networks without hidden units using the standard quadratic error function and a sigmoidal transfer function, we find that the error decreases as 1/t for large t, and the output states approach their target values as 1/√t. It is possible to obtain a different convergence rate for certain error and transfer functions, but the convergence can never be faster than 1/t. These results are unaffected by a momentum term in the learning algorithm, but convergence can be substantially improved by an adaptive learning rate scheme.

For networks with hidden units, we generally expect the same rate of convergence to be obtained as in the single-layer case; however, under certain circumstances one can obtain a polynomial speed-up for nonsigmoidal units, or a logarithmic speed-up for sigmoidal units. Our analytic results are confirmed by empirical measurements of the convergence rate in numerical simulations.
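The 1/t error decay described in the abstract is easy to reproduce empirically. The sketch below (not from the paper; a minimal illustrative setup) trains a single sigmoid unit by gradient descent on a quadratic error with target 1.0, which the sigmoid only reaches asymptotically. If the error E(t) decays as 1/t, the product t·E(t) should level off at a constant at large t:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def train(steps, eta=1.0):
    """Gradient descent on E = (y - z)^2 / 2 for one sigmoid unit with
    input x = 1 and target y = 1 (an asymptotic value of the sigmoid).
    Records the error at power-of-ten checkpoints."""
    w = 0.0
    y = 1.0
    checkpoints = {10**k for k in range(3, 7) if 10**k <= steps}
    errors = {}
    for t in range(1, steps + 1):
        z = sigmoid(w)
        # -dE/dw = (y - z) * z * (1 - z)  (quadratic error, sigmoid unit)
        w += eta * (y - z) * z * (1.0 - z)
        if t in checkpoints:
            z = sigmoid(w)
            errors[t] = 0.5 * (y - z) ** 2
    return errors

errors = train(10**6)
for t in sorted(errors):
    # t * E(t) should approach a constant if E(t) ~ 1/t
    print(f"t = {t:>7}  t*E(t) = {t * errors[t]:.4f}")
```

Solving the continuum limit of this update (dw/dt ≈ η·e^(-2w) for large w) gives E(t) ≈ 1/(4ηt), so with η = 1 the printed values of t·E(t) should hover near 0.25, consistent with the abstract's claim that momentum-free gradient descent cannot beat 1/t.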
Pages: 382-391 (10 pages)
References (9)
[1] Ahmad S. (1988). Proceedings of the 1988 Connectionist Models Summer School.
[2] Ahmad S. (1988). Thesis, University of Illinois at Urbana-Champaign.
[3] Hinton G. E. (1987). Connectionist learning procedures. Technical Report CMU-CS-87-115, Dept. of Computer Science, Carnegie Mellon University.
[4] Widrow B., Hoff M. E. (1960). Adaptive switching circuits. DOI 10.21236/AD0241531.
[5] LeCun Y. (1985). Proceedings of Cognitiva 85, p. 599.
[6] Minsky M., Papert S. (1969). Perceptrons.
[7] Parker D. B. (1985). Learning-logic. Technical Report TR-47, MIT Center for Computational Research in Economics and Management Science.
[8] Rumelhart D. E., Hinton G. E., Williams R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
[9] Werbos P. J. (1974). Beyond regression: New tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard University.