Improving the convergence of the backpropagation algorithm using learning rate adaptation methods

Cited by: 99
Authors
Magoulas, GD [1]
Vrahatis, MN
Androulakis, GS
Affiliations
[1] Univ Athens, Dept Informat, GR-15771 Athens, Greece
[2] Univ Patras, Artificial Intelligence Res Ctr, GR-26110 Patras, Greece
[3] Univ Patras, Dept Math, GR-26110 Patras, Greece
DOI
10.1162/089976699300016223
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve backpropagation training in terms of both convergence rate and convergence characteristics, such as learning stability and robustness to oscillations. Simulations compare the convergence behavior of these gradient-based training algorithms with that of several popular training methods.
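The adaptation described in the abstract can be illustrated with a minimal sketch. The snippet below assumes the standard secant-style local Lipschitz estimate Λ_k = ||∇E(w_k) − ∇E(w_{k−1})|| / ||w_k − w_{k−1}|| and the step size η_k = 1/(2Λ_k), which needs no error-function or gradient evaluations beyond the one gradient per iteration; the function name, parameters, and quadratic test problem are hypothetical, and the paper's algorithms additionally enforce the Goldstein/Armijo conditions, which this sketch omits.

```python
import numpy as np

def train_adaptive_lr(grad, w, eta0=0.1, max_iter=1000, tol=1e-6):
    # Hypothetical sketch: gradient descent whose common learning rate
    # is adapted from a local Lipschitz-constant estimate, using only
    # quantities already computed at each iteration.
    g_prev = grad(w)
    w_prev = w.copy()
    w = w - eta0 * g_prev              # first step uses a fixed rate
    for _ in range(max_iter):
        g = grad(w)                    # the only gradient call per step
        if np.linalg.norm(g) < tol:
            break
        # Secant-style estimate: Lambda ~ ||g_k - g_{k-1}|| / ||w_k - w_{k-1}||
        lam = np.linalg.norm(g - g_prev) / np.linalg.norm(w - w_prev)
        eta = 1.0 / (2.0 * lam) if lam > 0 else eta0
        w_prev, g_prev = w.copy(), g
        w = w - eta * g
    return w

# Toy usage on an ill-conditioned quadratic E(w) = 0.5 * w^T A w.
A = np.diag([1.0, 10.0])
w_min = train_adaptive_lr(lambda w: A @ w, np.array([5.0, 3.0]))
```

A per-weight variant would apply the same estimate coordinate-wise, giving each weight its own rate, as the abstract's second family of algorithms does.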
Pages: 1769-1796
Page count: 28