LINEAR CONVERGENCE OF DESCENT METHODS FOR THE UNCONSTRAINED MINIMIZATION OF RESTRICTED STRONGLY CONVEX FUNCTIONS

Cited by: 15
Authors
Schoepfer, Frank [1 ]
Affiliation
[1] Carl von Ossietzky Univ Oldenburg, Inst Math, D-26111 Oldenburg, Germany
Keywords
linear convergence; restricted strong convexity; error bound; quadratic splines; BFGS; conjugate gradient; conjugate-gradient method; quasi-Newton methods; memory BFGS method; global convergence; error bounds; quadratic programs; nonsmooth analysis; Bregman iteration; singular values; algorithm
DOI
10.1137/140992990
CLC number (Chinese Library Classification)
O29 [Applied Mathematics]
Subject classification code
070104
Abstract
Linear convergence rates of descent methods for unconstrained minimization are usually proved under the assumption that the objective function is strongly convex. Recently, it was shown that the weaker assumption of restricted strong convexity suffices for linear convergence of the ordinary gradient descent method. A decisive difference from strong convexity is that the set of minimizers of a restricted strongly convex function need be neither a singleton nor bounded. In this paper we extend the linear convergence results under this weaker assumption to a larger class of descent methods, including restarted nonlinear CG, BFGS, and its damped limited-memory variants, L-D-BFGS. For twice continuously differentiable objective functions we even obtain r-step superlinear convergence for the CG_DESCENT conjugate gradient method of Hager and Zhang, where r is greater than or equal to the rank of the Hessian at a minimizer. This is remarkable since the Hessian of a restricted strongly convex function need not have full rank. Furthermore, we show that convex quadratic splines and objective functions of the unconstrained duals to some linearly constrained optimization problems are restricted strongly convex. In particular, this holds for the regularized basis pursuit problem and its analogues for nuclear norm minimization and principal component pursuit.
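For context, a common formulation of restricted strong convexity from the error-bound literature (the paper's precise definition may differ in detail) requires that for some \nu > 0 and all x in the domain,

\[
\langle \nabla f(x), \, x - p(x) \rangle \;\ge\; \nu \, \| x - p(x) \|^2 ,
\]

where p(x) denotes the Euclidean projection of x onto the minimizer set X^*. Unlike strong convexity, this permits X^* to be an unbounded affine set: a minimal example is f(x) = \tfrac{1}{2}\|Ax - b\|^2 with a rank-deficient matrix A, which satisfies the inequality with \nu equal to the smallest nonzero eigenvalue of A^{\top}A, even though its minimizers form a whole affine subspace.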
Pages: 1883-1911
Number of pages: 29