An Accelerated Stochastic Mirror Descent Method

Cited: 0
Authors
Jiang, Bo-Ou [1 ,2 ]
Yuan, Ya-Xiang [1 ]
Affiliations
[1] Chinese Acad Sci, LSEC, ICMSEC, AMSS, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Dept Math, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Large-scale optimization; Variance reduction; Mirror descent; Acceleration; Independent sampling; Importance sampling; Thresholding algorithm; Optimization
DOI
10.1007/s40305-023-00492-2
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Discipline Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
Driven by large-scale optimization problems arising in machine learning, the development of stochastic optimization methods has grown rapidly. Numerous methods have been built on the vanilla stochastic gradient descent method. However, for most algorithms the convergence rate in the stochastic setting does not simply match that in the deterministic setting. The main goal of this paper is to better understand this gap between deterministic and stochastic optimization. Specifically, we are interested in Nesterov acceleration of gradient-based approaches. Our study focuses on accelerating a stochastic mirror descent method with an implicit regularization property. Assuming that the objective is smooth and convex or strongly convex, our analysis prescribes method parameters that ensure fast convergence of the estimation error and satisfactory numerical performance.
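The abstract does not spell out the update rule, so the following Python sketch illustrates one generic way to couple Nesterov-style acceleration with stochastic mirror descent. It is a minimal sketch under stated assumptions, not the authors' method: the entropic mirror map on the probability simplex, the weighting alpha = 2/(t+1), the step size eta = t/(4L), the uniform independent mini-batch sampling, and the helper name accelerated_smd are all illustrative choices, not the parameter prescription derived in the paper.

import numpy as np

def accelerated_smd(grad_i, n, x0, steps=500, L=1.0, batch=10, rng=None):
    """Sketch of accelerated stochastic mirror descent on the simplex.

    grad_i(x, i): gradient of the i-th component function f_i at x.
    All schedules below are generic textbook choices, assumed for illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()              # weighted-average iterate
    z = x0.copy()              # mirror-descent iterate
    for t in range(1, steps + 1):
        alpha = 2.0 / (t + 1)               # standard accelerated weighting
        eta = t / (4.0 * L)                 # growing mirror step size
        y = (1 - alpha) * x + alpha * z     # extrapolation (coupling) point
        idx = rng.integers(0, n, size=batch)              # uniform independent sampling
        g = np.mean([grad_i(y, i) for i in idx], axis=0)  # mini-batch stochastic gradient
        z = z * np.exp(-eta * g)            # entropic mirror step ...
        z = z / z.sum()                     # ... renormalized onto the simplex
        x = (1 - alpha) * x + alpha * z     # weighted averaging of iterates
    return x

# Hypothetical usage: component gradients of a least-squares objective.
n, d = 200, 5
rng = np.random.default_rng(0)
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = accelerated_smd(grad, n, np.full(d, 1.0 / d),
                        L=float(np.max(np.sum(A * A, axis=1))))

With the negative-entropy mirror map the mirror step reduces to a multiplicative update followed by normalization, which is why the iterates stay on the simplex without an explicit projection; swapping in the squared Euclidean norm would recover an accelerated projected stochastic gradient method.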
Pages: 549-571 (23 pages)