AN AUGMENTED LAGRANGIAN BASED ALGORITHM FOR DISTRIBUTED NONCONVEX OPTIMIZATION

Cited by: 133
Authors
Houska, Boris [1 ]
Frasch, Janick [2 ]
Diehl, Moritz [3 ,4 ]
Affiliations
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, 319 Yueyang Rd, Shanghai 200031, Peoples R China
[2] Univ Magdeburg, Fac Math, Univ Pl 2, D-39106 Magdeburg, Germany
[3] Univ Freiburg, Dept Microsyst Engn IMTEK, Georges Koehler Allee 102, D-79110 Freiburg, Germany
[4] Univ Freiburg, Dept Math, Georges Koehler Allee 102, D-79110 Freiburg, Germany
Funding
National Natural Science Foundation of China; European Research Council
Keywords
nonconvex optimization; large-scale problems; distributed algorithms; PRIMAL DUAL DECOMPOSITION; MODEL-PREDICTIVE CONTROL; ACTIVE-SET STRATEGY; CONSTRAINED OPTIMIZATION; SYSTEM OPTIMIZATION; CONVEX-OPTIMIZATION; PENALTY-FUNCTION; SQP ALGORITHM; CONVERGENCE; MULTIPLIERS
DOI
10.1137/140975991
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
This paper is about distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interpreted as prices, a concept employed in dual decomposition methods and the alternating direction method of multipliers (ADMM). Here, each agent solves its own small-scale nonlinear programming problem and communicates with other agents by solving coupled quadratic programming problems. These coupled quadratic programming problems have equality constraints for which parallelizable methods are available. The use of techniques associated with standard sequential quadratic programming methods gives a method with superlinear or quadratic convergence rate under suitable conditions. This is in contrast to existing decomposition methods, such as ADMM, which have a linear convergence rate. It is shown how the proposed algorithm may be extended using globalization techniques that guarantee convergence to a local minimizer from any initial starting point.
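The coordination pattern described in the abstract (decoupled augmented Lagrangian subproblems followed by a coupled equality-constrained QP whose multiplier becomes the new shared price) can be illustrated with a minimal Python sketch. The toy problem data, the scipy-based local solver, the finite-difference gradients, the scaled-identity Hessian blocks, and the full-step updates below are illustrative assumptions, not the paper's exact algorithm, which relies on genuine Hessian (approximation) information and globalization safeguards to obtain the convergence guarantees stated above.

# Minimal sketch of an augmented-Lagrangian/SQP-style distributed iteration for
#   min_x  sum_i f_i(x_i)   s.t.  sum_i A_i x_i = b,
# loosely following the structure described in the abstract.
# Problem data, solver choices, and step rules here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

# Toy data: two agents, each with a 2-dimensional (possibly nonconvex) objective.
def f1(y): return 0.25 * np.sum(y**4) - y[0]           # agent 1 objective
def f2(y): return np.cosh(y[0]) + (y[1] - 1.0)**2      # agent 2 objective
fs = [f1, f2]
As = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]  # coupling matrices A_i
b  = np.array([1.0])                                    # coupled constraint sum_i A_i x_i = b

rho = 10.0                     # augmented Lagrangian penalty parameter (assumed value)
xs  = [np.zeros(2), np.zeros(2)]
lam = np.zeros(1)              # shared dual variable ("price")

def num_grad(f, y, h=1e-6):
    """Forward-difference gradient (stand-in for exact derivatives)."""
    g = np.zeros_like(y)
    for k in range(len(y)):
        e = np.zeros_like(y); e[k] = h
        g[k] = (f(y + e) - f(y)) / h
    return g

for it in range(30):
    # 1) Decoupled step: each agent minimizes its local augmented Lagrangian.
    ys = []
    for f, Ai, x in zip(fs, As, xs):
        obj = lambda y, f=f, Ai=Ai, x=x: (f(y) + lam @ (Ai @ y)
                                          + 0.5 * rho * np.sum((y - x)**2))
        ys.append(minimize(obj, x).x)

    # 2) Coordination step: coupled equality-constrained QP in the deviations
    #    dy_i, with scaled-identity Hessian blocks (a simplification).
    gs = [num_grad(f, y) for f, y in zip(fs, ys)]
    n  = sum(len(y) for y in ys)
    H  = rho * np.eye(n)                               # block-diagonal Hessian model
    A_all = np.hstack(As)                              # stacked coupling matrix
    g  = np.concatenate(gs)
    r  = b - sum(Ai @ yi for Ai, yi in zip(As, ys))    # coupling residual
    # KKT system of the QP:  [H A^T; A 0] [dy; lam_qp] = [-g; r]
    KKT = np.block([[H, A_all.T], [A_all, np.zeros((len(b), len(b)))]])
    sol = np.linalg.solve(KKT, np.concatenate([-g, r]))
    dy, lam = sol[:n], sol[n:]                         # QP multiplier becomes the new price

    # 3) Primal update (full step; the paper adds globalization safeguards).
    offset = 0
    for i, y in enumerate(ys):
        xs[i] = y + dy[offset:offset + len(y)]
        offset += len(y)

print("x =", xs, "lambda =", lam,
      "residual =", b - sum(Ai @ xi for Ai, xi in zip(As, xs)))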
Pages: 1101 - 1127
Number of pages: 27