The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization

Cited by: 79
Authors
Dhingra, Neil K. [1 ]
Khong, Sei Zhen [2 ]
Jovanovic, Mihailo R. [3 ]
Affiliations
[1] Numerica Corp, Ft Collins, CO 80528 USA
[2] Univ Hong Kong, Dept Elect & Elect Engn, Pokfulam, Hong Kong, Peoples R China
[3] Univ Southern Calif, Dept Elect Engn, Los Angeles, CA 90089 USA
Keywords
Augmented Lagrangian; control for optimization; global exponential stability; method of multipliers; non-smooth optimization; primal-dual dynamics; proximal algorithms; proximal augmented Lagrangian; regularization for design; structured optimal control; ALGORITHM; DYNAMICS; CONVERGENCE; STABILITY;
DOI
10.1109/TAC.2018.2867589
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
We study a class of optimization problems in which the objective function is the sum of a differentiable, possibly nonconvex, component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian: a continuously differentiable function obtained by restricting the augmented Lagrangian to the manifold that corresponds to explicit minimization over the variable in the nonsmooth term. The continuous differentiability of this function with respect to both primal and dual variables allows us to leverage the method of multipliers (MM) to compute optimal primal-dual pairs by solving a sequence of differentiable problems. The MM algorithm applies to a broader class of problems than proximal gradient methods, and it offers stronger convergence guarantees and more refined step-size update rules than the alternating direction method of multipliers (ADMM). These features make it an attractive option for solving structured optimal control problems. We also develop an algorithm based on the primal-descent dual-ascent gradient method and prove global (exponential) asymptotic stability when the differentiable component of the objective function is (strongly) convex and the regularization term is convex. Finally, we identify classes of problems for which the primal-dual gradient flow dynamics are convenient for distributed implementation, and we compare and contrast our framework with existing approaches.
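As a minimal illustration of the Moreau-envelope construction the abstract relies on (a sketch under standard definitions, not the authors' code), the snippet below evaluates the proximal operator and Moreau envelope of the l1 regularizer g(z) = gamma*||z||_1. The minimizer of the envelope's inner problem is the soft-thresholding operator, and the envelope is continuously differentiable with gradient (v - prox)/mu, which is the smoothness property the proximal augmented Lagrangian exploits:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def moreau_envelope_l1(v, mu, gamma=1.0):
    # Moreau envelope M_{mu g}(v) = min_z g(z) + ||z - v||^2 / (2 mu)
    # for g(z) = gamma*||z||_1; the minimizer z* is the prox, and
    # grad M_{mu g}(v) = (v - z*)/mu, so M is continuously differentiable
    # even though g itself is not.
    z = prox_l1(v, mu * gamma)
    return gamma * np.abs(z).sum() + np.sum((z - v) ** 2) / (2.0 * mu)

v = np.array([3.0, -0.5, 1.0])
print(prox_l1(v, 1.0))              # -> [2. 0. 0.]
print(moreau_envelope_l1(v, 1.0))   # -> 3.125
```

Replacing the nonsmooth term by this differentiable envelope is what lets gradient-based primal-dual methods (MM iterations or continuous-time gradient flows) be applied directly.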
Pages: 2861-2868
Page count: 8