Efficient Mirror Descent Ascent Methods for Nonsmooth Minimax Problems

Cited by: 0
Authors
Huang, Feihu [1 ]
Wu, Xidong [1 ]
Huang, Heng [1 ]
Affiliations
[1] Univ Pittsburgh, Dept Elect & Comp Engn, Pittsburgh, PA 15260 USA
Keywords
GRADIENT;
DOI
None available
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a class of efficient mirror descent ascent methods that solve nonsmooth nonconvex-strongly-concave minimax problems by using dynamic mirror functions, and we introduce a convergence analysis framework to conduct rigorous theoretical analysis of these methods. For our stochastic algorithms, we first prove that the mini-batch stochastic mirror descent ascent (SMDA) method obtains a gradient complexity of O(κ³ε⁻⁴) for finding an ε-stationary point, where κ denotes the condition number. Further, we propose an accelerated stochastic mirror descent ascent (VR-SMDA) method based on the variance-reduction technique, and we prove that it achieves a lower gradient complexity of O(κ³ε⁻³). For our deterministic algorithm, we prove that our deterministic mirror descent ascent (MDA) method achieves a gradient complexity of O(√κ ε⁻²) under mild conditions, which matches the best known complexity for solving smooth nonconvex-strongly-concave minimax optimization. We conduct experiments on fair classifier and robust neural network training tasks to demonstrate the efficiency of our new algorithms.
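The descent-ascent scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' SMDA algorithm; it is a plain stochastic gradient descent ascent loop, which is what mirror descent ascent reduces to when the mirror map is the Euclidean squared norm. The toy objective f(x, y) = sin(x) + xy − y²/2 (nonconvex in x, strongly concave in y), the step sizes, and the noise level are all illustrative assumptions.

```python
import numpy as np

def sgda_sketch(T=2000, eta_x=0.05, eta_y=0.1, noise=0.01, seed=0):
    """Stochastic gradient descent ascent on a toy
    nonconvex-strongly-concave objective (illustrative only):
        f(x, y) = sin(x) + x*y - y**2 / 2
    With the Euclidean mirror map, the mirror step is an ordinary
    (stochastic) gradient step.
    """
    rng = np.random.default_rng(seed)
    x, y = 1.0, 0.0
    for _ in range(T):
        # Noisy partial gradients stand in for mini-batch estimates.
        gx = np.cos(x) + y + noise * rng.standard_normal()
        gy = x - y + noise * rng.standard_normal()
        x = x - eta_x * gx  # descent step on the min variable
        y = y + eta_y * gy  # ascent step on the max variable
    return x, y

x, y = sgda_sketch()
```

Here Φ(x) = max_y f(x, y) = sin(x) + x²/2, so a stationary point satisfies cos(x) + x = 0 (x ≈ −0.739); the loop drives |cos(x) + x| toward zero while y tracks its best response y*(x) = x.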
Pages: 13
Related papers (50 records)
  • [21] Universal Gradient Descent Ascent Method for Nonconvex-Nonconcave Minimax Optimization
    Zheng, Taoli
    Zhu, Linglingzhi
    So, Anthony Man-Cho
    Blanchet, José
    Li, Jiajin
    arXiv, 2022,
  • [22] Descent Methods for Nonsmooth Convex Constrained Minimization
    Kiwiel, K. C.
    LECTURE NOTES IN ECONOMICS AND MATHEMATICAL SYSTEMS, 1985, 255 : 203 - 214
  • [23] Distributed Mirror Descent Algorithm With Bregman Damping for Nonsmooth Constrained Optimization
    Chen, Guanpu
    Xu, Gehui
    Li, Weijian
    Hong, Yiguang
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (11) : 6921 - 6928
  • [24] Accelerated Proximal Alternating Gradient-Descent-Ascent for Nonconvex Minimax Machine Learning
    Chen, Ziyi
    Ma, Shaocong
    Zhou, Yi
    2022 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, ISIT, 2022, : 672 - 677
  • [25] An implicit gradient-descent procedure for minimax problems
    Essid, Montacer
    Tabak, Esteban G.
    Trigila, Giulio
    MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2023, 97 (01) : 57 - 89
  • [27] Optimality and Duality for Nonsmooth Minimax Programming Problems Using Convexifactor
    Ahmad, I.
    Kummari, Krishna
    Singh, Vivek
    Jayswal, Anurag
    FILOMAT, 2017, 31 (14) : 4555 - 4570
  • [28] Penalty method for nonsmooth minimax control problems with interdependent variables
    Gorelik, V.A.
    Tarakanov, A.F.
    Cybernetics (English Translation of Kibernetika), 1990, 25 (04):
  • [29] Convergence of the Iterates in Mirror Descent Methods
    Doan, Thinh T.
    Bose, Subhonmesh
    Nguyen, D. Hoa
    Beck, Carolyn L.
    IEEE CONTROL SYSTEMS LETTERS, 2019, 3 (01): : 114 - 119
  • [30] Near-optimal Local Convergence of Alternating Gradient Descent-Ascent for Minimax Optimization
    Zhang, Guodong
    Wang, Yuanhao
    Lessard, Laurent
    Grosse, Roger
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151