Indefinite Linearized Augmented Lagrangian Method for Convex Programming with Linear Inequality Constraints

Cited by: 0
Authors
He, Bingsheng [1 ]
Xu, Shengjie [2 ,3 ]
Yuan, Jing [4 ]
Affiliations
[1] Nanjing Univ, Dept Math, Nanjing, Peoples R China
[2] Harbin Inst Technol, Dept Math, Harbin, Peoples R China
[3] Southern Univ Sci & Technol, Dept Math, Shenzhen, Peoples R China
[4] Zhejiang Normal Univ, Coll Math Med, Jinhua, Peoples R China
Keywords
Augmented Lagrangian method; Convex programming; Convergence analysis; Inequality constraints; Image segmentation; Continuous max-flow; Algorithms; Framework;
DOI
10.1007/s10013-024-00712-z
Chinese Library Classification
O1 [Mathematics];
Discipline Code
0701; 070101;
Abstract
The augmented Lagrangian method (ALM) is a benchmark for convex programming problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. However, much less attention has been paid to using ALM to efficiently solve linearly inequality-constrained convex minimization models. In this paper, we exploit an enlightening reformulation of the recently developed indefinite linearized ALM for the equality-constrained convex optimization problem, and present a new indefinite linearized ALM scheme for efficiently solving the convex optimization problem with linear inequality constraints. The proposed method offers two main advantages, especially for large-scale optimization: first, it greatly simplifies the challenging key subproblem of the classic ALM through a linearized reformulation while keeping the computational complexity low; second, we show that a smaller proximity regularization term suffices for provable convergence, which permits a larger step size and hence significantly better performance. Moreover, we establish the global convergence of the proposed scheme via its equivalent compact prediction-correction representation, along with a worst-case $\mathcal{O}(1/N)$ convergence rate. Numerical results on several application problems demonstrate that a smaller regularization term leads to better experimental performance, further confirming the theoretical results presented in this study.
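The abstract's two algorithmic ingredients, a linearized x-subproblem with a smaller-than-classical ("indefinite") proximal term and a projected multiplier update for the inequality constraints, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the toy objective f(x) = 0.5*||x - c||^2, the parameter names, and the threshold tau > 3/4, which is borrowed from the equality-constrained indefinite linearized ALM; the paper's actual scheme is analyzed through a prediction-correction reformulation that this sketch does not reproduce.

import numpy as np

def linearized_alm(A, b, c, beta=1.0, tau=0.8, iters=500):
    """Sketch: min 0.5*||x - c||^2  s.t.  A x <= b (toy instance)."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)  # multipliers for A x <= b, kept nonnegative
    # Indefinite-style proximal weight: r = tau*beta*||A^T A||_2 with tau < 1,
    # i.e. smaller than the classical requirement r >= beta*||A^T A||_2.
    r = tau * beta * np.linalg.norm(A.T @ A, 2)
    for _ in range(iters):
        # Linearize the augmented term at x^k and add (r/2)||x - x^k||^2:
        # x^{k+1} = argmin f(x) + <A^T(lam + beta*(A x^k - b)), x> + (r/2)||x - x^k||^2
        g = A.T @ (lam + beta * (A @ x - b))
        x = (c + r * x - g) / (1.0 + r)  # closed form for f(x) = 0.5*||x - c||^2
        # Multiplier update projected onto the nonnegative orthant,
        # as the inequality constraints require.
        lam = np.maximum(0.0, lam + beta * (A @ x - b))
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 10))
    b = rng.standard_normal(5)
    c = rng.standard_normal(10)
    x, lam = linearized_alm(A, b, c)
    print("max constraint violation:", float(np.maximum(A @ x - b, 0.0).max()))

Taking tau closer to 1 recovers the classical positive-semidefinite regularization; the abstract's point is that values below 1, which correspond to a larger effective step, still admit provable convergence.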
Pages: 26