Convergence Rates for Stochastic Approximation: Biased Noise with Unbounded Variance, and Applications

Cited: 0
Authors
Karandikar, Rajeeva Laxman [1 ]
Vidyasagar, Mathukumalli [2 ]
Affiliations
[1] Chennai Math Inst, Chennai, India
[2] Indian Inst Technol Hyderabad, Hyderabad, India
Keywords
Stochastic gradient descent; Stochastic approximation; Nonconvex optimization; Martingale methods
DOI
10.1007/s10957-024-02547-7
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Discipline classification codes
070105; 12; 1201; 1202; 120202
Abstract
In this paper, we study the convergence properties of the Stochastic Gradient Descent (SGD) method for finding a stationary point of a given objective function $J(\cdot)$. The objective function is not required to be convex. Rather, our results apply to a class of "invex" functions, which have the property that every stationary point is also a global minimizer. First, it is assumed that $J(\cdot)$ satisfies a property that is slightly weaker than the Kurdyka-Łojasiewicz (KL) condition, denoted here as (KL'). It is shown that the iterations $J(\boldsymbol{\theta}_t)$ converge almost surely to the global minimum of $J(\cdot)$.
Next, the hypothesis on $J(\cdot)$ is strengthened from (KL') to the Polyak-Łojasiewicz (PL) condition. With this stronger hypothesis, we derive estimates on the rate of convergence of $J(\boldsymbol{\theta}_t)$ to its limit. Using these results, we show that for functions satisfying the PL property, the convergence rate of both the objective function and the norm of the gradient with SGD is the same as the best possible rate for convex functions. While some results along these lines have been published in the past, our contributions contain two distinct improvements. First, the assumptions on the stochastic gradient are more general than elsewhere, and second, our convergence is almost sure, and not in expectation. We also study SGD when only function evaluations are permitted. In this setting, we determine the "optimal" increments, or the size of the perturbations. Using the same set of ideas, we establish the global convergence of the Stochastic Approximation (SA) algorithm under more general assumptions on the measurement error, compared to the existing literature. We also derive bounds on the rate of convergence of the SA algorithm under appropriate assumptions.
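The PL setting described in the abstract admits a compact illustration. A standard nonconvex function satisfying the PL inequality $\|\nabla J(\theta)\|^2 \ge 2\mu\,(J(\theta) - J^*)$ is $J(\theta) = \theta^2 + 3\sin^2\theta$, whose only stationary point is the global minimizer $\theta^* = 0$. The sketch below is not the authors' code; the step-size schedule and Gaussian noise model are illustrative assumptions showing SGD with zero-mean gradient noise and Robbins-Monro step sizes on this function:

```python
import math
import random

def J(x):
    # Nonconvex but PL: J(x) = x^2 + 3 sin^2(x); global minimum J(0) = 0
    return x * x + 3.0 * math.sin(x) ** 2

def grad_J(x):
    # dJ/dx = 2x + 3 sin(2x)
    return 2.0 * x + 3.0 * math.sin(2.0 * x)

def sgd(x0, steps=20000, seed=0):
    rng = random.Random(seed)
    x = x0
    for t in range(1, steps + 1):
        alpha = 1.0 / (10.0 + t)      # Robbins-Monro steps: sum diverges, sum of squares converges
        noise = rng.gauss(0.0, 1.0)   # zero-mean stochastic gradient error
        x -= alpha * (grad_J(x) + noise)
    return x

x_final = sgd(3.0)
print(x_final, J(x_final))
```

With these step sizes the iterates settle near the global minimizer despite the noise, consistent with the almost-sure convergence the paper establishes under far weaker noise assumptions than this i.i.d. Gaussian sketch.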
Pages: 2412-2450 (39 pages)