Path following in the exact penalty method of convex programming

Cited by: 0
Authors
Hua Zhou
Kenneth Lange
Affiliations
[1] Department of Statistics, North Carolina State University
[2] Departments of Biomathematics, Human Genetics, and Statistics, University of California
Source
Computational Optimization and Applications | 2015, Vol. 61
Keywords
Constrained convex optimization; Exact penalty; Geometric programming; Ordinary differential equation; Quadratically constrained quadratic programming; Regularization; Semidefinite programming; 65K05; 90C25;
DOI
Not available
Abstract
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to (a) projection onto a convex set, (b) nonnegative least squares, (c) quadratically constrained quadratic programming, (d) geometric programming, and (e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
Pages: 609–634 (25 pages)
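
The abstract describes tracing the exact-penalty solution as a function of the penalty constant. As a rough illustration only, the Python sketch below approximates such a path for nonnegative least squares: it minimizes 0.5‖Ax − b‖² + ρ·Σ max(−xᵢ, 0) by proximal gradient descent, warm-starting each successively larger penalty constant from the previous solution. This is not the authors' ODE-based or piecewise-linear path-following algorithm; the test matrix A, vector b, penalty grid, and helper names are made-up assumptions for the demonstration.

```python
import numpy as np

# Exact-penalty surrogate for nonnegative least squares (illustrative assumption):
#   minimize  0.5 * ||A x - b||^2 + rho * sum_i max(-x_i, 0)
# As rho grows, the minimizer approaches the constrained (x >= 0) solution.

def prox_hinge(v, t, rho):
    """Proximal map of x -> rho * sum(max(-x, 0)) with step size t.
    A one-sided soft threshold: negative entries are pushed toward zero."""
    return np.where(v >= 0, v, np.minimum(v + t * rho, 0.0))

def penalized_solution(A, b, rho, x0, step, iters=3000):
    """Proximal gradient descent on the exact-penalty objective at a fixed rho."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth least-squares term
        x = prox_hinge(x - step * grad, step, rho)
    return x

def follow_path(A, b, rhos):
    """Crude solution path: warm-start each penalty constant from the last minimizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.linalg.lstsq(A, b, rcond=None)[0] # rho = 0: the unconstrained solution
    path = []
    for rho in rhos:
        x = penalized_solution(A, b, rho, x, step)
        path.append(x.copy())
    return np.array(path)

if __name__ == "__main__":
    rng = rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 5))         # hypothetical test problem
    b = rng.standard_normal(30)
    rhos = np.linspace(0.1, 20.0, 40)        # increasing penalty constants
    path = follow_path(A, b, rhos)
    # Along the path, negative coordinates shrink and eventually stick at zero.
    print(path[[0, len(rhos) // 2, -1]].round(4))
```

The one-sided soft threshold in prox_hinge handles the kink of the absolute-value penalty exactly within each subproblem, and the warm starts make the computed minimizers vary continuously with the penalty constant, loosely mimicking the behavior of the solution path described in the abstract.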