A hybrid stochastic optimization framework for composite nonconvex optimization

Cited by: 25
Authors
Tran-Dinh, Quoc [1]
Pham, Nhan H. [1]
Phan, Dzung T. [2]
Nguyen, Lam M. [2]
Affiliations
[1] Univ N Carolina, Dept Stat & Operat Res, 318 Hanes Hall, Chapel Hill, NC 27599 USA
[2] IBM Res, Thomas J Watson Res Ctr, Yorktown Hts, NY 10598 USA
Funding
US National Science Foundation
Keywords
Hybrid stochastic estimator; Stochastic optimization algorithm; Oracle complexity; Variance reduction; Composite nonconvex optimization; Gradient descent; Approximation; Algorithm
DOI
10.1007/s10107-020-01583-1
Chinese Library Classification
TP31 [Computer Software]
Subject Classification
081202; 0835
Abstract
We introduce a new approach to developing stochastic optimization algorithms for a class of stochastic composite and possibly nonconvex optimization problems. The main idea is to combine a variance-reduced estimator with an unbiased stochastic estimator to create a new hybrid estimator that trades off variance against bias and possesses useful properties for developing new algorithms. We first introduce our hybrid estimator and investigate its fundamental properties to form a foundational theory for algorithmic development. Next, we apply the new estimator to develop several variants of the stochastic gradient method for solving both expectation and finite-sum composite optimization problems. Our first algorithm can be viewed as a variant of proximal stochastic gradient methods with a single loop and a single sample per iteration, yet it achieves the same best-known oracle complexity bound as state-of-the-art double-loop algorithms in the literature. We then consider two further variants of this method, one with adaptive step-sizes and one with restarting, which enjoy theoretical guarantees similar to those of the first algorithm. We also study two mini-batch variants of the proposed methods. In all cases, we achieve the best-known complexity bounds under standard assumptions. We test our algorithms on several numerical examples with real datasets and compare them with many existing methods. Our numerical experiments show that the new algorithms are comparable to, and in many cases outperform, their competitors.
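The core construction in the abstract, a hybrid estimator blending a variance-reduced recursion with an unbiased stochastic gradient inside a single-loop proximal scheme, can be sketched in a few lines. The Python fragment below is a minimal illustration under stated assumptions, not the authors' reference implementation: it assumes the hybrid estimator is a convex combination, with weight beta, of a SARAH-type recursive difference term and a plain stochastic gradient, followed by one proximal gradient step per iteration. All identifiers (prox_hybrid_sgd, grad_f, prox_g, eta, beta) are illustrative.

import numpy as np

def prox_hybrid_sgd(grad_f, prox_g, sample, x0, eta=0.1, beta=0.9, iters=1000):
    """Single-loop, single-sample proximal stochastic gradient sketch.

    grad_f(x, xi): stochastic gradient of f at x for a sample draw xi.
    prox_g(x, eta): proximal operator of eta * g.
    sample(): draws one i.i.d. sample.
    """
    x_prev = x0.copy()
    v = grad_f(x_prev, sample())            # initial gradient estimate
    x = prox_g(x_prev - eta * v, eta)       # first proximal gradient step
    for _ in range(iters):
        xi, zeta = sample(), sample()       # two independent draws per iteration
        # Hybrid estimator: beta weights the biased, variance-reduced
        # SARAH-type recursion; (1 - beta) weights the unbiased SGD term.
        v = beta * (v + grad_f(x, xi) - grad_f(x_prev, xi)) \
            + (1.0 - beta) * grad_f(x, zeta)
        x_prev, x = x, prox_g(x - eta * v, eta)
    return x

# Illustrative use (synthetic data): l1-regularized least squares.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((200, 50)), rng.standard_normal(200), 0.1

def grad_f(x, i):
    """Gradient of the i-th component 0.5 * (A[i] @ x - b[i])**2."""
    return A[i] * (A[i] @ x - b[i])

def prox_g(x, eta):
    """Soft-thresholding: prox of eta * lam * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - eta * lam, 0.0)

x_hat = prox_hybrid_sgd(grad_f, prox_g, sample=lambda: rng.integers(200),
                        x0=np.zeros(50), eta=0.05, beta=0.95, iters=5000)

In this sketch, beta close to 1 emphasizes the variance-reduced recursion, while the (1 - beta) term keeps a small unbiased correction that limits the accumulated bias; the variants named in the abstract presumably differ in how beta and the step-size eta are chosen (fixed, adaptive, or with restarting).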
Pages: 1005-1071 (67 pages)