Subgradient Methods for Saddle-Point Problems

Authors
A. Nedić
A. Ozdaglar
Affiliations
[1] University of Illinois at Urbana-Champaign,Department of Industrial and Enterprise Systems Engineering
[2] Massachusetts Institute of Technology,Department of Electrical Engineering and Computer Science
Source
Journal of Optimization Theory and Applications | 2009年 / 142卷
Keywords
Saddle-point subgradient methods; Averaging; Approximate primal solutions; Primal-dual subgradient methods; Convergence rate;
DOI: not available
Abstract
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
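The primal-dual subgradient iteration with averaging that the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm verbatim: the bilinear function L(x, y) = x·y, the box constraints, the constant step size, and the iteration count are all assumptions chosen for a toy instance whose unique saddle point is (0, 0).

```python
# Subgradient method with iterate averaging for a convex-concave
# saddle-point problem: min over x in X, max over y in Y of L(x, y).
# Toy instance (an assumption for illustration): L(x, y) = x * y on
# X = Y = [-1, 1], whose unique saddle point is (0, 0).

def project(v, lo=-1.0, hi=1.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return max(lo, min(hi, v))

def saddle_subgradient(x0=0.5, y0=0.5, step=0.05, iters=4000):
    x, y = x0, y0
    x_sum, y_sum = 0.0, 0.0
    for _ in range(iters):
        # Subgradients of L(x, y) = x * y: dL/dx = y, dL/dy = x.
        gx, gy = y, x
        # Descend in the (primal) minimizing variable,
        # ascend in the (dual) maximizing variable.
        x = project(x - step * gx)
        y = project(y + step * gy)
        x_sum += x
        y_sum += y
    # The raw iterates circulate around the saddle point and need not
    # converge; the time-averaged iterates do approximate it.
    return x_sum / iters, y_sum / iters

x_avg, y_avg = saddle_subgradient()
print(x_avg, y_avg)  # both near 0, the saddle point of x * y
```

The averaging step is the point of the sketch: on this bilinear example the raw iterates spiral around (0, 0) rather than converge, while the averaged sequence approaches the saddle point, mirroring the per-iteration convergence-rate estimates the abstract attributes to the averaged solutions.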
Pages: 205-228 (23 pages)