On R-linear convergence analysis for a class of gradient methods

Cited by: 0
Author
Na Huang
Affiliation
Department of Applied Mathematics, College of Science, China Agricultural University
Source
Computational Optimization and Applications | 2022, Vol. 81
Keywords
Quadratic optimization; Gradient methods; Spectral algorithms; R-linear convergence analysis; R-factor; 65K05; 90C06; 90C30
DOI
Not available
Abstract
The gradient method is a simple optimization approach that uses the negative gradient of the objective function as a search direction. Its efficiency relies heavily on the choice of stepsize. In this paper, we analyze the convergence behavior of a class of gradient methods whose stepsize possesses an important property introduced by Dai (Optimization 52:395–415, 2003). Our analysis focuses on the minimization of strictly convex quadratic functions. We establish R-linear convergence and derive an estimate for the R-factor. Specifically, if the stepsize can be expressed as a collection of Rayleigh quotients of the inverse Hessian matrix, we show that these methods converge R-linearly and that their R-factors are bounded above by $1-\frac{1}{\varkappa}$, where $\varkappa$ is the associated condition number. Preliminary numerical results demonstrate the tightness of our estimate of the R-factor.
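For a sequence $\{x_k\}$ converging to $x^*$, the R-factor is $\limsup_{k\to\infty} \Vert x_k - x^*\Vert^{1/k}$, and R-linear convergence means this quantity is strictly less than 1. As an illustration only (not code from the paper), the sketch below runs the Barzilai-Borwein (BB1) method, whose stepsize $s^\top s/(s^\top A s)$ can be written as a Rayleigh quotient of the inverse Hessian and so is a representative member of the class considered here, on a random strictly convex quadratic and compares the observed decay rate of the gradient norms against the bound $1-1/\varkappa$. The test problem, variable names, and stopping rule are all our own assumptions.

```python
import numpy as np

# Minimal sketch: BB1 gradient method on f(x) = 0.5 x'Ax - b'x with
# an SPD Hessian A whose condition number kappa is known by construction.
rng = np.random.default_rng(0)
n = 50
eigs = np.linspace(1.0, 100.0, n)            # spectrum of A; kappa = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T                  # SPD Hessian
b = rng.standard_normal(n)
kappa = eigs[-1] / eigs[0]

x = np.zeros(n)
g = A @ x - b                                # gradient of f
alpha = 1.0 / eigs[-1]                       # conservative first stepsize
norms = [np.linalg.norm(g)]
for _ in range(2000):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    s = x_new - x
    alpha = (s @ s) / (s @ (A @ s))          # BB1: a Rayleigh quotient of A^{-1}
    x, g = x_new, g_new
    norms.append(np.linalg.norm(g))
    if norms[-1] <= 1e-10 * norms[0]:
        break

# Observed R-factor: geometric mean of the per-step decay of ||g_k||,
# compared against the theoretical bound 1 - 1/kappa.
k = len(norms) - 1
observed = (norms[-1] / norms[0]) ** (1.0 / k)
print(f"iterations: {k}")
print(f"observed R-factor ~ {observed:.4f}  vs  bound 1 - 1/kappa = {1 - 1/kappa:.4f}")
```

BB1 is only one instance: any stepsize expressible as such a Rayleigh quotient, for example the exact steepest-descent stepsize $g^\top g/(g^\top A g)$, fits the same framework and could be swapped in for the `alpha` update above.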
Pages: 161–177
Page count: 16
References
51 items in total
  • [1] Akaike H (1959) On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method. Ann. Inst. Stat. Math. Tokyo 11:1-17
  • [2] Barzilai J, Borwein JM (1988) Two-point step size gradient methods. IMA J. Numer. Anal. 8:141-148
  • [3] Burdakov O, Dai Y-H, Huang N (2019) Stabilized Barzilai-Borwein method. J. Comp. Math. 37:916-936
  • [4] Cauchy A (1847) Méthode générale pour la résolution des systèmes d'équations simultanées. Comp. Rend. Sci. Paris 25:536-538
  • [5] Curtis FE, Guo W (2018) R-linear convergence of the limited memory steepest descent method. IMA J. Numer. Anal. 38:720-742
  • [6] Dai Y-H (2003) Alternate step gradient method. Optimization 52:395-415
  • [7] Dai Y-H (2013) A new analysis on the Barzilai-Borwein gradient method. J. Oper. Res. Soc. China 1:187-198
  • [8] Dai Y-H, Fletcher R (2005) On the asymptotic behaviour of some new gradient methods. Math. Program. 103:541-559
  • [9] Dai Y-H, Fletcher R (2005) Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming. Numer. Math. 100:21-47
  • [10] Dai Y-H, Hager WW, Schittkowski K, Zhang H (2006) The cyclic Barzilai-Borwein method for unconstrained optimization. IMA J. Numer. Anal. 26:604-627