A new gradient method with an optimal stepsize property

Cited: 21
Authors
Dai, YH [1 ]
Yang, XQ
Affiliations
[1] Chinese Acad Sci, Acad Math & Syst Sci, Inst Computat Math & Sci Engn Comp, State Key Lab Sci & Engn Comp, Beijing 100080, Peoples R China
[2] Hong Kong Polytech Univ, Dept Appl Math, Kowloon, Hong Kong, Peoples R China
Keywords
linear system; gradient method; steepest descent method; (shifted) power method
DOI
10.1007/s10589-005-5959-2
Chinese Library Classification
C93 [Management Science]; O22 [Operations Research]
Subject classification codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
The gradient method for the symmetric positive definite linear system Ax = b is x(k+1) = x(k) - alpha(k) g(k), where g(k) = Ax(k) - b is the residual of the system at x(k) and alpha(k) is the stepsize. The stepsize alpha(k) = 2/(lambda(1) + lambda(n)) is optimal in the sense that it minimizes the norm ||I - alpha A||_2, where lambda(1) and lambda(n) are the minimal and maximal eigenvalues of A, respectively. Since lambda(1) and lambda(n) are unknown to users, the gradient method with the optimal stepsize is usually mentioned only in theory. In this paper, we propose a new stepsize formula that tends to the optimal stepsize as k -> infinity. At the same time, the minimal and maximal eigenvalues lambda(1) and lambda(n) of A, together with their corresponding eigenvectors, can be obtained.
Pages: 73-88
Page count: 16