Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities

Cited: 0
Authors
Gorbunov, Eduard [1 ,2 ,3 ,4 ]
Taylor, Adrien [5 ,6 ,7 ]
Affiliations
[1] MIPT, Dolgoprudnyi, Russia
[2] Mila, Montreal, PQ, Canada
[3] UdeM, Montreal, PQ, Canada
[4] MBZUAI, Abu Dhabi, U Arab Emirates
[5] INRIA, CNRS, Paris, France
[6] Ecole Normale Super, CNRS, Paris, France
[7] PSL Res Univ, Paris, France
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022 | 2022
Funding
European Research Council
Keywords
WORST-CASE PERFORMANCE; 1ST-ORDER METHODS; CONVEX; OPTIMIZATION; SEARCH; TIGHT;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Past Extragradient (PEG) method [Popov, 1980], also known as the Optimistic Gradient method, has seen a recent surge of interest in the optimization community with the emergence of variational inequality formulations for machine learning. Recently, in the unconstrained case, Golowich et al. [2020a] proved that an O(1/N) last-iterate convergence rate in terms of the squared norm of the operator can be achieved for Lipschitz and monotone operators with a Lipschitz Jacobian. In this work, by introducing a novel analysis through potential functions, we show that (i) this O(1/N) last-iterate convergence can be achieved without any assumption on the Jacobian of the operator, and (ii) it can be extended to the constrained case, which was not derived before even under Lipschitzness of the Jacobian. The proof is significantly different from the one of Golowich et al. [2020a], and its discovery was computer-aided. These results settle the open question of the last-iterate convergence of PEG for monotone variational inequalities.
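As an illustrative sketch (not the paper's analysis), the PEG/Optimistic Gradient recursion can be written as an extrapolation step that reuses the previously evaluated operator value, followed by the actual update: z̄_k = z_k − γF(z̄_{k−1}), z_{k+1} = z_k − γF(z̄_k). This costs one operator evaluation per iteration, versus two for the classical extragradient method. Below is a minimal NumPy sketch on the toy bilinear problem min_x max_y xy, whose operator F(x, y) = (y, −x) is monotone and 1-Lipschitz; the step size and iteration count are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def F(z):
    """Monotone, 1-Lipschitz operator of the bilinear game min_x max_y x*y.

    Its unique solution is z* = (0, 0)."""
    x, y = z
    return np.array([y, -x])

def peg(z0, gamma=0.1, num_steps=5000):
    """Past Extragradient (PEG) / Optimistic Gradient iteration.

    Unlike extragradient, the extrapolation step reuses the operator value
    already computed at the previous extrapolated point, so each iteration
    needs only one fresh evaluation of F.
    """
    z = np.asarray(z0, dtype=float).copy()
    z_bar = z.copy()  # extrapolated ("past") point, initialized at z0
    for _ in range(num_steps):
        z_bar = z - gamma * F(z_bar)  # extrapolate using the past value
        z = z - gamma * F(z_bar)      # update using the new evaluation
    return z

z_final = peg([1.0, 1.0])
residual = np.linalg.norm(F(z_final))
print(residual)  # operator norm at the last iterate; should be tiny
```

On this bilinear example the last iterate in fact converges linearly to the solution; the O(1/N) rate discussed in the paper is the guarantee for general Lipschitz monotone operators, where no linear rate is available.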
Pages: 13
Related Papers
50 records
  • [1] Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity
    Gorbunov, Eduard
    Loizou, Nicolas
    Gidel, Gauthier
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151 : 366 - 402
  • [2] Last-iterate Convergence in Extensive-Form Games
    Lee, Chung-Wei
    Kroer, Christian
    Luo, Haipeng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Last-iterate Convergence Separation between Extra-gradient and Optimism in Constrained Periodic Games
    Feng, Yi
    Li, Ping
    Panageas, Ioannis
    Wang, Xiao
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2024, 244 : 1339 - 1370
  • [4] On Last-Iterate Convergence Beyond Zero-Sum Games
    Anagnostides, Ioannis
    Panageas, Ioannis
    Farina, Gabriele
    Sandholm, Tuomas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022, : 536 - 581
  • [5] Last-Iterate Convergence Rates for Min-Max Optimization: Convergence of Hamiltonian Gradient Descent and Consensus Optimization
    Abernethy, Jacob
    Lai, Kevin A.
    Wibisono, Andre
    ALGORITHMIC LEARNING THEORY, VOL 132, 2021, 132
  • [6] Linear Last-Iterate Convergence for Continuous Games with Coupled Inequality Constraints
    Meng, Min
    Li, Xiuxian
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 1076 - 1081
  • [7] Last-iterate convergence analysis of stochastic momentum methods for neural networks
    Liu, Jinlan
    Xu, Dongpo
    Lu, Yinghua
    Kong, Jun
    Mandic, Danilo P.
    NEUROCOMPUTING, 2023, 527 : 27 - 35
  • [8] A Modified Projected Gradient Method for Monotone Variational Inequalities
    Yang, Jun
    Liu, Hongwei
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2018, 179 (01) : 197 - 211
  • [9] Revisit last-iterate convergence of mSGD under milder requirement on step size
    Jin, Ruinan
    He, Xingkang
    Chen, Lang
    Cheng, Difei
    Gupta, Vijay
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022