First-Order Algorithms for Robust Optimization Problems via Convex-Concave Saddle-Point Lagrangian Reformulation

Cited by: 0
Authors
Postek, Krzysztof [1 ]
Shtern, Shimrit [1 ]
Affiliation
[1] Technion Israel Inst Technol, Fac Data & Decis Sci, IL-3200003 Haifa, Israel
Funding
Israel Science Foundation
Keywords
convergence analysis; first-order methods; robust optimization; saddle point; variational inequalities
DOI
10.1287/ijoc.2022.0200
CLC Number
TP39 [Computer Applications]
Discipline Code
081203; 0835
Abstract
Robust optimization (RO) is one of the key paradigms for solving optimization problems affected by uncertainty. The two principal approaches to RO, the robust counterpart method and the adversarial approach, potentially lead to excessively large optimization problems. For that reason, first-order methods based on online convex optimization have been proposed as alternatives for large-scale problems. However, existing first-order methods are either stochastic in nature or involve a binary search for the optimal value. We show that the problem can also be solved with deterministic first-order algorithms, based on a saddle-point Lagrangian reformulation, that avoid both of these issues. Our approach recovers the O(1/ε²) convergence rate of the other approaches in the general case and achieves an improved O(1/ε) rate for problems whose constraints are affine in both the decision and the uncertainty. An experiment involving robust quadratic optimization demonstrates the numerical benefits of our approach.
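To illustrate the kind of deterministic first-order saddle-point scheme the abstract refers to, the sketch below runs averaged projected gradient descent-ascent on a toy bilinear convex-concave saddle point (a matrix game over probability simplices). This is a generic textbook method shown only as an assumption-laden illustration, not the paper's actual algorithm or its Lagrangian reformulation; the function names and the O(1/√T)-type ergodic averaging are standard choices, not taken from the source.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex via the
    # standard sort-and-threshold procedure.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def saddle_point_gda(A, steps=5000, eta=0.05):
    """Averaged projected gradient descent-ascent for the bilinear
    saddle point  min_x max_y  x^T A y  over probability simplices.
    The ergodic (averaged) iterates converge to a saddle point."""
    m, n = A.shape
    x = np.ones(m) / m
    y = np.ones(n) / n
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        gx = A @ y            # partial gradient in x (descend)
        gy = A.T @ x          # partial gradient in y (ascend)
        x = project_simplex(x - eta * gx)
        y = project_simplex(y + eta * gy)
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps

# Matching-pennies payoff matrix: saddle value 0 at x = y = (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = saddle_point_gda(A)
```

The last iterates of plain descent-ascent cycle on bilinear problems; averaging is what yields convergence here, which mirrors why more structured methods (as in the paper's affine case) can improve the rate from O(1/ε²) to O(1/ε).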
Pages: 26