On memory gradient method with trust region for unconstrained optimization

Cited by: 0
Authors
Zhen-Jun Shi
Jie Shen
Institutions
[1] Qufu Normal University,College of Operations Research and Management
[2] University of Michigan,Department of Computer & Information Science
Source
Numerical Algorithms | 2006, Vol. 41
Keywords
unconstrained optimization; memory gradient method; global convergence; 90C30; 49M37; 65K05;
DOI
Not available
Abstract
In this paper we present a new memory gradient method with trust region for unconstrained optimization problems. The method combines the line search and trust region approaches to generate new iterates at each iteration, and therefore enjoys the advantages of both. It makes full use of iterative information from several previous steps and avoids storing and computing the matrices associated with the Hessian of the objective function, so it is well suited to large-scale optimization problems. We also design an implementable version of this method and analyze its global convergence under weak conditions. Because it draws on more information from previous iterative steps, this idea enables the design of fast, effective, and robust algorithms. Numerical experiments show that the new method is effective, stable, and robust in practical computation compared with other similar methods.
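To illustrate the core idea of a memory gradient iteration (not the authors' specific algorithm, whose trust-region rule and parameter choices are given in the paper), the sketch below forms a search direction from the current negative gradient mixed with up to `m` previous directions, safeguarded by a fallback to steepest descent and an Armijo backtracking line search. The function names, the mixing weight `beta`, and the memory size `m` are illustrative assumptions.

```python
import numpy as np

def memory_gradient_minimize(f, grad, x0, m=3, beta=0.2, tol=1e-8, max_iter=500):
    """Illustrative memory gradient iteration: the search direction mixes the
    current steepest-descent direction with up to m previous directions.
    This is a simplified sketch, not the trust-region variant from the paper."""
    x = np.asarray(x0, dtype=float)
    prev_dirs = []  # memory of previous search directions (at most m of them)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # combine -g with the stored previous directions
        d = -g
        for dp in prev_dirs:
            d = d + (beta / len(prev_dirs)) * dp
        # safeguard: fall back to steepest descent if d is not a descent direction
        if g @ d >= 0:
            d = -g
        # Armijo backtracking line search
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d
        prev_dirs.append(d)
        if len(prev_dirs) > m:
            prev_dirs.pop(0)
    return x
```

Because only the last `m` direction vectors are stored, the per-iteration memory cost is O(mn) for an n-dimensional problem, which is the property the abstract highlights: no Hessian or Hessian-like matrix ever needs to be formed.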
Pages: 173–196