Stochastic Gradient Descent with Noise of Machine Learning Type Part I: Discrete Time Analysis

Cited by: 0
Author
Wojtowytsch, Stephan [1 ]
Affiliation
[1] Texas A&M University, Department of Mathematics, 155 Ireland St, College Station, TX 77840, USA
Keywords
Stochastic gradient descent; Almost sure convergence; Łojasiewicz inequality; Non-convex optimization; Machine learning; Deep learning; Overparametrization; Global minimum selection
DOI
10.1007/s00332-023-09903-3
CLC number
O29 [Applied Mathematics]
Subject classification code
070104
Abstract
Stochastic gradient descent (SGD) is one of the most popular algorithms in modern machine learning. The noise encountered in these applications is different from that in many theoretical analyses of stochastic gradient algorithms. In this article, we discuss some of the common properties of energy landscapes and stochastic noise encountered in machine learning problems, and how they affect SGD-based optimization. In particular, we show that the learning rate in SGD with machine learning noise can be chosen to be small, but uniformly positive for all times, if the energy landscape resembles that of overparametrized deep learning problems. If the objective function satisfies a Łojasiewicz inequality, SGD converges to the global minimum exponentially fast, and even for functions which may have local minima, we establish almost sure convergence to the global minimum at an exponential rate from any finite-energy initialization. The assumptions behind this result concern the behavior of the objective function where it is either small or large, together with the nature of the gradient noise; the energy landscape is left largely unconstrained in the region where the objective function takes intermediate values.
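For orientation, a minimal sketch (not taken from the record itself; the notation f for the objective, x_k for the iterates, \eta for the learning rate, and g_k for the stochastic gradient is chosen here for illustration): the setting described in the abstract can be summarized by the SGD update

x_{k+1} = x_k - \eta\, g_k, \qquad \mathbb{E}[g_k \mid x_k] = \nabla f(x_k),

together with a machine-learning-type noise bound in which the variance scales with the objective value (assuming \min f = 0),

\mathbb{E}\big[\|g_k - \nabla f(x_k)\|^2 \mid x_k\big] \le \sigma^2 f(x_k),

and a Łojasiewicz (Polyak–Łojasiewicz-type) inequality \|\nabla f(x)\|^2 \ge 2\mu f(x). Under these assumptions and L-smoothness of f, a standard one-step estimate gives

\mathbb{E}[f(x_{k+1}) \mid x_k] \le \big(1 - 2\mu\eta + C\eta^2\big)\, f(x_k),

with a constant C depending on L and \sigma^2, so a small but uniformly positive learning rate \eta yields exponential decay of the expected objective. The precise constants, conditions, and the almost sure statements are as given in the article.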
Pages: 52