An acceleration of gradient descent algorithm with backtracking for unconstrained optimization

Cited by: 79
Authors
Andrei, Neculai [1]
Affiliations
[1] Ctr Adv Modeling & Optimizat, Res Inst Informat, Bucharest 1, Romania
Keywords
acceleration methods; backtracking; gradient descent methods
DOI
10.1007/s11075-006-9023-9
Chinese Library Classification
O29 [Applied Mathematics]
Discipline code
070104
Abstract
In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength t(k) by means of a positive parameter theta(k), in a multiplicative manner, in such a way as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
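The scheme described in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract: a standard Armijo backtracking line search produces t(k), and the step is then rescaled by a multiplicative factor theta(k) > 0 chosen to further reduce the function value. The particular rule used here for choosing theta(k) (comparing a few candidate scalings) is an assumption for illustration; it is not Andrei's actual formula, which the paper derives from information gathered along the step.

```python
import numpy as np

def backtracking(f, x, g, t0=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink t until the sufficient-decrease
    condition f(x - t g) <= f(x) - c t ||g||^2 holds."""
    t = t0
    fx = f(x)
    while f(x - t * g) > fx - c * t * g.dot(g):
        t *= rho
    return t

def accelerated_gd(f, grad, x0, tol=1e-6, max_iter=1000):
    """Gradient descent with backtracking, with the steplength t(k)
    rescaled multiplicatively by theta(k), as the abstract describes."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = backtracking(f, x, g)
        # Multiplicative acceleration: pick theta(k) among a few
        # candidates to reduce f further. Illustrative choice only;
        # the paper's theta(k) is computed, not searched.
        theta = 1.0
        for cand in (0.5, 2.0):
            if f(x - cand * t * g) < f(x - theta * t * g):
                theta = cand
        x = x - theta * t * g
    return x
```

On a simple ill-conditioned quadratic such as f(x) = 0.5 (x1^2 + 10 x2^2), both the plain and the rescaled iteration converge linearly to the minimizer; the theta(k) rescaling can only improve the per-step decrease, since theta = 1 recovers the classical step.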
Pages: 63-73 (11 pages)