High Probability Convergence Bounds for Non-convex Stochastic Gradient Descent with Sub-Weibull Noise

Cited: 0
Authors
Madden, Liam [1 ]
Dall'Anese, Emiliano [2 ]
Becker, Stephen [3 ]
Affiliations
[1] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
[2] Boston Univ, Dept Elect & Comp Engn, Boston, MA, USA
[3] Univ Colorado, Dept Appl Math, Boulder, CO, USA
Funding
U.S. National Science Foundation
Keywords
stochastic gradient descent; convergence bounds; sub-Weibull distributions; Polyak-Lojasiewicz inequality; Freedman inequality
DOI
Not available
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Stochastic gradient descent is one of the most common iterative algorithms used in machine learning, and its convergence analysis is a rich area of research. Understanding its convergence properties can help inform which modifications of it to use in different settings. However, most theoretical results either assume convexity or only provide convergence results in mean. This paper, on the other hand, proves convergence bounds in high probability without assuming convexity. Assuming strong smoothness, we prove high probability convergence bounds in two settings: (1) assuming the Polyak-Lojasiewicz inequality and norm sub-Gaussian gradient noise and (2) assuming norm sub-Weibull gradient noise. In the second setting, as an intermediate step to proving convergence, we prove a sub-Weibull martingale difference sequence self-normalized concentration inequality of independent interest. It extends Freedman-type concentration beyond the sub-exponential threshold to heavier-tailed martingale difference sequences. We also provide a post-processing method that picks a single iterate with a provable convergence guarantee, as opposed to the usual bound for the unknown best iterate. Our convergence result for sub-Weibull noise extends the regime where stochastic gradient descent has equal or better convergence guarantees than stochastic gradient descent with modifications such as clipping, momentum, and normalization.
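The setting the abstract describes, SGD on a smooth non-convex objective with heavy-tailed (sub-Weibull) gradient noise, can be illustrated with a minimal sketch. The toy objective, noise construction, step size, and iteration count below are illustrative choices for this record, not the paper's analysis or experiments; a sub-Weibull variable with tail parameter theta is generated here as a signed power of an exponential variable:

```python
import numpy as np

rng = np.random.default_rng(0)

def sub_weibull_noise(theta=1.5, scale=0.1):
    """Mean-zero noise with sub-Weibull tail parameter theta.

    |xi| = scale * E**theta with E ~ Exp(1), so
    P(|xi| >= t) = exp(-(t / scale)**(1 / theta));
    theta > 1 gives tails heavier than sub-exponential.
    """
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * scale * rng.exponential(1.0) ** theta

def grad_f(x):
    # Gradient of a hypothetical smooth non-convex test function
    # f(x) = x**2 / 2 + 2*cos(x); f''(x) = 1 - 2*cos(x) < 0 near 0,
    # so f is non-convex but strongly smooth.
    return x - 2.0 * np.sin(x)

x = 3.0
eta = 0.1  # constant step size (illustrative choice)
grad_norms = []
for _ in range(500):
    g = grad_f(x) + sub_weibull_noise()  # stochastic gradient oracle
    x -= eta * g
    grad_norms.append(abs(grad_f(x)))

# Non-convex bounds typically control the best (smallest) gradient
# norm over the trajectory; the paper's post-processing step instead
# selects a single iterate with a comparable guarantee.
best_grad_norm = min(grad_norms)
print(f"best gradient norm over 500 iterations: {best_grad_norm:.4f}")
```

Tracking `min` of the true gradient norms mirrors the "unknown best iterate" quantity that standard non-convex analyses bound; in practice that iterate is not identifiable from noisy observations, which is what motivates the paper's post-processing selection rule.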
Pages: 1-36
Page count: 36