Convergence of approximate and incremental subgradient methods for convex optimization

Cited by: 135
Author
Kiwiel, KC [1]
Affiliation
[1] Polish Acad Sci, Syst Res Inst, PL-01447 Warsaw, Poland
Keywords
nondifferentiable optimization; convex programming; subgradient optimization; approximate subgradients; efficiency
DOI
10.1137/S1052623400376366
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
We present a unified convergence framework for approximate subgradient methods that covers various stepsize rules (including both diminishing and nonvanishing stepsizes), convergence in objective values, and convergence to a neighborhood of the optimal set. We discuss ways of ensuring the boundedness of the iterates and give efficiency estimates. Our results are extended to incremental subgradient methods for minimizing a sum of convex functions, which have recently been shown to be promising for various large-scale problems, including those arising from Lagrangian relaxation.
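To make the abstract's setting concrete, here is a minimal sketch (not the paper's algorithm) of an incremental subgradient method with a diminishing stepsize for minimizing a sum of convex functions; the function `incremental_subgradient`, the component subgradients, and the 1/k stepsize rule are all illustrative assumptions.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, steps):
    """Minimize f(x) = sum_i f_i(x) by cycling through the components,
    taking one subgradient step per component f_i.

    subgrads : list of callables, each returning a subgradient of f_i at x
    x0       : starting point (numpy array)
    steps    : iterable of stepsizes alpha_k (e.g. a diminishing rule)
    """
    x = np.asarray(x0, dtype=float)
    for alpha in steps:
        for g_i in subgrads:        # one outer iteration = one cycle over all components
            x = x - alpha * g_i(x)  # step along a negative subgradient of f_i
    return x

# Example: f(x) = |x - 1| + |x + 1|, whose optimal set is [-1, 1] (optimal value 2).
g1 = lambda x: np.sign(x - 1.0)   # a subgradient of |x - 1|
g2 = lambda x: np.sign(x + 1.0)   # a subgradient of |x + 1|
steps = (1.0 / k for k in range(1, 200))  # diminishing stepsize alpha_k = 1/k
x_star = incremental_subgradient([g1, g2], np.array([5.0]), steps)
print(x_star)  # ends up in (a neighborhood of) the optimal set [-1, 1]
```

With the diminishing rule alpha_k = 1/k the iterates approach the optimal set, whereas a constant (nonvanishing) stepsize would only guarantee convergence to a neighborhood of it, mirroring the distinction drawn in the abstract.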
Pages: 807-840
Page count: 34