Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities

Cited by: 0
Authors
Song, Chaobing [1 ]
Zhou, Zhengyuan [1 ,3 ]
Zhou, Yichao [2 ]
Jiang, Yong [1 ]
Ma, Yi
Affiliations
[1] Tsinghua Univ, Tsinghua Berkeley Shenzhen Inst, Beijing, Peoples R China
[2] Univ Calif Berkeley, Dept EECS, Berkeley, CA USA
[3] NYU, Stern Sch Business, New York, NY USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020 | 2020 / Vol. 33
Keywords
MONOTONE;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The optimization problems associated with training generative adversarial neural networks can be largely reduced to certain non-monotone variational inequality problems (VIPs), whereas existing convergence results are mostly based on monotone or strongly monotone assumptions. In this paper, we propose optimistic dual extrapolation (OptDE), a method that performs only one gradient evaluation per iteration. We show that OptDE is provably convergent to a strong solution under different coherent non-monotone assumptions. In particular, when a weak solution exists, the convergence rate of our method is O(1/epsilon^2), which matches the best existing result for methods with two gradient evaluations. Further, when a sigma-weak solution exists, the convergence guarantee is improved to the linear rate O(log(1/epsilon)). Along the way, as a byproduct of our inquiries into non-monotone variational inequalities, we provide the near-optimal O((1/epsilon) log(1/epsilon)) convergence guarantee in terms of the restricted strong merit function for monotone variational inequalities. We also show how our results can be naturally generalized to the stochastic setting, and obtain corresponding new convergence results. Taken together, our results contribute to the broad landscape of variational inequalities, both non-monotone and monotone alike, by providing a novel and more practical algorithm with state-of-the-art convergence guarantees.
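The abstract's key practical point is that OptDE needs only one operator (gradient) evaluation per iteration, unlike extragradient-type methods that need two. As a hedged illustration only, the sketch below shows a generic single-call optimistic update, z_{t+1} = z_t - eta*(2F(z_t) - F(z_{t-1})), which reuses the previous evaluation instead of recomputing; this is not the paper's exact OptDE iteration (its dual-extrapolation form is not stated here). The bilinear saddle problem min_x max_y x*y, whose VI operator is F(x, y) = (y, -x) with strong solution (0, 0), serves as a toy test case.

```python
def F(z):
    # VI operator of the bilinear saddle problem min_x max_y x*y:
    # (grad_x, -grad_y) = (y, -x); strong solution is the origin.
    x, y = z
    return (y, -x)

def optimistic_step(z, g_prev, eta):
    # Single operator evaluation per iteration: evaluate F once at the
    # current point, combine it with the stored previous evaluation.
    g = F(z)
    z_next = tuple(zi - eta * (2.0 * gi - gpi)
                   for zi, gi, gpi in zip(z, g, g_prev))
    return z_next, g  # g becomes g_prev on the next iteration

eta = 0.1
z = (1.0, 1.0)
g_prev = F(z)  # warm-start the memory with the initial evaluation
for _ in range(2000):
    z, g_prev = optimistic_step(z, g_prev, eta)

print(z)  # drifts toward the strong solution (0, 0)
```

Note that plain gradient descent-ascent on this bilinear problem spirals outward; the optimistic correction term 2F(z_t) - F(z_{t-1}) is what restores convergence while keeping one evaluation per step.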
Pages: 12