Subgradient Methods for Saddle-Point Problems

Cited by: 0
Authors
A. Nedić
A. Ozdaglar
Affiliations
[1] University of Illinois at Urbana-Champaign, Department of Industrial and Enterprise Systems Engineering
[2] Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Source
Journal of Optimization Theory and Applications | 2009 / Volume 142
Keywords
Saddle-point subgradient methods; Averaging; Approximate primal solutions; Primal-dual subgradient methods; Convergence rate
DOI
Not available
Abstract
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in the design of decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
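As an illustration of the averaging idea described in the abstract, the following is a minimal sketch, not the paper's exact algorithm or step-size rule, of a primal-dual subgradient iteration with iterate averaging applied to the Lagrangian of a small convex problem. The problem data, constant step size, and iteration count are illustrative assumptions chosen for this example.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact method) of a saddle-point
# subgradient iteration with averaging, applied to the Lagrangian of the problem
#     minimize ||x||^2  subject to  a^T x >= 1,
# whose Lagrangian is L(x, mu) = ||x||^2 + mu * (1 - a^T x) with mu >= 0.
# The data a, step size alpha, and iteration count are arbitrary example choices.

a = np.array([1.0, 2.0])
alpha = 0.05          # constant step size
num_iters = 5000

x = np.zeros(2)       # primal iterate
mu = 0.0              # dual iterate (multiplier)
x_avg = np.zeros(2)   # running average of primal iterates
mu_avg = 0.0          # running average of dual iterates

for k in range(1, num_iters + 1):
    # Subgradients of L(x, mu) with respect to x and mu.
    grad_x = 2.0 * x - mu * a
    grad_mu = 1.0 - a @ x

    # Descend in the primal variable, ascend in the dual variable,
    # projecting the multiplier back onto [0, +inf).
    x = x - alpha * grad_x
    mu = max(0.0, mu + alpha * grad_mu)

    # Averaging: the approximate saddle point is the time average of the iterates.
    x_avg += (x - x_avg) / k
    mu_avg += (mu - mu_avg) / k

print("averaged primal solution:", x_avg)   # close to a / ||a||^2 = [0.2, 0.4]
print("averaged dual solution:  ", mu_avg)  # close to 2 / ||a||^2 = 0.4
print("constraint violation:    ", max(0.0, 1.0 - a @ x_avg))
```

In this sketch the averaged iterates serve as the approximate primal-dual solution, and the last print line reports the amount of feasibility violation at the averaged primal point, mirroring the kind of bounds discussed in the abstract.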
Pages: 205-228
Number of pages: 23