Minimax Optimization: The Case of Convex-Submodular

Cited by: 0
Authors
Adibi, Arman [1 ]
Mokhtari, Aryan [2 ]
Hassani, Hamed [1 ]
Affiliations
[1] Univ Penn, Philadelphia, PA 19104 USA
[2] Univ Texas Austin, Austin, TX 78712 USA
Source
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151 | 2022 / Vol. 151
Funding
National Science Foundation (USA);
Keywords
VARIATIONAL-INEQUALITIES; OPTIMISTIC GRADIENT; CONVERGENCE; MAXIMIZATION;
DOI
None available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Minimax optimization has been central in addressing various applications in machine learning, game theory, and control theory. Prior literature has thus far mainly focused on studying such problems in the continuous domain, e.g., convex-concave minimax optimization is now understood to a significant extent. Nevertheless, minimax problems extend far beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains. In this paper, we study mixed continuous-discrete minimax problems where the minimization is over a continuous variable belonging to Euclidean space and the maximization is over subsets of a given ground set. We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable. Even though such problems appear frequently in machine learning applications, little is known about how to address them from algorithmic and theoretical perspectives. For such problems, we first show that obtaining saddle points is hard up to any approximation, and thus introduce new notions of (near-) optimality. We then provide several algorithmic procedures for solving convex and monotone-submodular minimax problems and characterize their convergence rates, computational complexity, and quality of the final solution according to our notions of optimality. Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization. Finally, we provide numerical experiments to showcase the effectiveness of our proposed methods.
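The abstract describes iterative schemes that alternate between continuous and discrete optimization tools. The following is a minimal illustrative sketch, not the paper's actual algorithm: it poses a toy convex-submodular problem min_x max_{|S| <= k} f(x, S) and alternates a greedy best-response for the set maximizer with a gradient step on the continuous minimizer. The objective, the coverage sets `covers`, and names such as `greedy_max` and `eta` are assumptions made for this example only.

```python
import numpy as np

# Toy convex-submodular objective (an assumed example, not the paper's setup):
#   f(x, S) = 0.5 * ||x||^2 + sum over elements covered by S of (c_j + x_j^2)
# f is convex in x; for fixed x, the coverage term is monotone submodular in S.
rng = np.random.default_rng(0)
n_items, n_elems, k = 6, 8, 2
covers = [set(rng.choice(n_elems, size=3, replace=False).tolist())
          for _ in range(n_items)]
c = rng.uniform(0.5, 1.0, size=n_elems)  # nonnegative element weights

def covered(S):
    # Union of the elements covered by the chosen items.
    return set().union(*(covers[i] for i in S)) if S else set()

def f(x, S):
    return 0.5 * (x @ x) + sum(c[j] + x[j] ** 2 for j in covered(S))

def greedy_max(x, k):
    """Greedy maximization of the monotone submodular map S -> f(x, S)."""
    S = set()
    for _ in range(k):
        gain = {i: f(x, S | {i}) - f(x, S) for i in range(n_items) if i not in S}
        S.add(max(gain, key=gain.get))
    return S

# Alternating scheme: greedy best-response for the maximizer,
# then one gradient step on x against that response.
x = np.ones(n_elems)
eta = 0.1
for _ in range(100):
    S = greedy_max(x, k)
    mask = np.zeros(n_elems)
    mask[list(covered(S))] = 1.0
    x -= eta * (x + 2.0 * x * mask)  # gradient of f(., S) at x

print(round(f(x, greedy_max(x, k)), 4))
```

In this toy instance the inner maximization is solved approximately by the classical (1 - 1/e)-greedy routine, while the outer minimization uses plain gradient descent; the paper's actual procedures and guarantees differ.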
Pages: 25