Stochastic modified equations for the asynchronous stochastic gradient descent

Cited by: 9
Authors
An, Jing [1 ]
Lu, Jianfeng [2 ,3 ]
Ying, Lexing [4 ,5 ]
Affiliations
[1] Stanford Univ, Inst Computat & Math Engn, Stanford, CA 94305 USA
[2] Duke Univ, Dept Math, Dept Chem, Box 90320, Durham, NC 27706 USA
[3] Duke Univ, Dept Phys, Box 90320, Durham, NC 27706 USA
[4] Stanford Univ, Dept Math, Stanford, CA 94305 USA
[5] Stanford Univ, Inst Computat & Math Engn ICME, Stanford, CA 94305 USA
Funding
US National Science Foundation;
Keywords
stochastic modified equations; asynchronous stochastic gradient descent; optimal control;
DOI
10.1093/imaiai/iaz030
Chinese Library Classification (CLC)
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
We propose stochastic modified equations (SMEs) for modelling asynchronous stochastic gradient descent (ASGD) algorithms. The resulting SME of Langevin type extracts more information about the ASGD dynamics and elucidates the relationship between different types of stochastic gradient algorithms. We show the convergence of ASGD to the SME in the continuous-time limit, as well as the SME's precise prediction of the ASGD trajectories under various forcing terms. As an application, we propose an optimal mini-batching strategy for ASGD by solving the optimal control problem of the associated SME.
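Since only the abstract survives in this record, the following is a minimal, hedged sketch of the setting it describes, not the paper's construction: it runs ASGD with stale gradients on a toy quadratic objective and, alongside it, an Euler-Maruyama discretization of the standard first-order SME for plain SGD, dX_t = -f'(X_t) dt + sqrt(eta) * sigma dW_t. The objective, noise level, delay distribution and step count are all illustrative assumptions; the paper's Langevin-type SME for ASGD contains additional delay-dependent terms not modelled here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: f(x) = 0.5 * lam * x**2, noisy gradient
# g(x) = lam * x + sigma * xi with xi ~ N(0, 1), uniform staleness.
lam, sigma = 1.0, 0.5
eta = 0.01            # learning rate, also the SME time step
n_steps = 2000
max_delay = 8         # cap on gradient staleness

def noisy_grad(x):
    return lam * x + sigma * rng.standard_normal()

# ASGD: each update applies a gradient evaluated at a stale iterate.
x_hist = [2.0]
for _ in range(n_steps):
    delay = int(rng.integers(0, max_delay + 1))
    stale_iterate = x_hist[max(0, len(x_hist) - 1 - delay)]
    x_hist.append(x_hist[-1] - eta * noisy_grad(stale_iterate))

# SME surrogate: Euler-Maruyama for dX = -lam * X dt + sqrt(eta) * sigma dW,
# the standard first-order SME for synchronous SGD (baseline only; the
# paper's ASGD SME carries extra delay-dependent terms).
X, dt = 2.0, eta
for _ in range(n_steps):
    X += -lam * X * dt + np.sqrt(eta) * sigma * np.sqrt(dt) * rng.standard_normal()

print(f"ASGD final iterate: {x_hist[-1]: .4f}")
print(f"SME  final iterate: {X: .4f}")
```

Both processes relax toward the minimiser at x = 0 and fluctuate around it with stationary variance of order eta * sigma^2 / (2 * lam); matching such statistics across delay models is the kind of comparison the SME framework is meant to make precise.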
Pages: 851-873
Number of pages: 23