Proximal alternating penalty algorithms for nonsmooth constrained convex optimization

Author
Quoc Tran-Dinh
Affiliation
Department of Statistics and Operations Research, University of North Carolina at Chapel Hill (UNC-Chapel Hill)
Source
Computational Optimization and Applications | 2019, Vol. 72
Keywords
Proximal alternating algorithm; Quadratic penalty method; Accelerated scheme; Constrained convex optimization; First-order methods; Convergence rate; 90C25; 90-08
Abstract
We develop two new proximal alternating penalty algorithms to solve a wide class of constrained convex optimization problems. Our approach relies on a novel combination of the classical quadratic penalty, alternating minimization, Nesterov's acceleration, and an adaptive strategy for the parameters. The first algorithm is designed to solve generic and possibly nonsmooth constrained convex problems without requiring any Lipschitz gradient continuity or strong convexity, while achieving the best-known $\mathcal{O}\left(\frac{1}{k}\right)$-convergence rate in a non-ergodic sense, where $k$ is the iteration counter. The second algorithm is designed for non-strongly convex but semi-strongly convex problems, and it achieves the best-known $\mathcal{O}\left(\frac{1}{k^2}\right)$-convergence rate on the primal constrained problem. Such a rate is obtained in two cases: (1) by averaging only the iterate sequence of the strongly convex term, or (2) by using two proximal operators of this term without averaging. In both algorithms, the second subproblem can be linearized so that only the proximal operator of the corresponding objective term is needed. We then customize our methods to different classes of convex problems, leading to new variants that preserve the same convergence guarantees as the main algorithms. We verify our theoretical development on different numerical examples and compare our methods with several existing state-of-the-art algorithms.
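As a rough illustration of how the ingredients above fit together, consider an assumed two-block template $\min_{x,y}\{ f(x) + g(y) : Ax + By = c \}$. This specific form, and the symbols $A$, $B$, $c$, $\rho_k$, $\beta_k$, $t_k$ below, are not taken from the abstract; they only sketch the general pattern of a quadratic-penalty alternating scheme in which the second subproblem is linearized so that only the proximal operator of $g$ is required:

% Illustrative sketch only (assumed two-block template, not the paper's exact scheme):
%   min_{x,y} f(x) + g(y)   subject to   Ax + By = c
% One step with penalty parameter rho_k > 0, linearization weight beta_k > 0,
% and extrapolation weight t_k:
\begin{align*}
  x^{k+1}       &\in \arg\min_{x} \Big\{ f(x) + \tfrac{\rho_k}{2}\,\big\|Ax + B\hat{y}^{k} - c\big\|^2 \Big\}, \\
  y^{k+1}       &= \mathrm{prox}_{g/\beta_k}\!\Big( \hat{y}^{k} - \tfrac{\rho_k}{\beta_k}\, B^{\top}\big(Ax^{k+1} + B\hat{y}^{k} - c\big) \Big)
                   \quad \text{(linearized second subproblem)}, \\
  \hat{y}^{k+1} &= y^{k+1} + t_k\,\big(y^{k+1} - y^{k}\big)
                   \quad \text{(Nesterov-type extrapolation)},
\end{align*}
after which $\rho_k$, $\beta_k$, and $t_k$ are updated adaptively.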
Pages: 1–43