Preconditioned Stochastic Gradient Descent

Cited by: 58
Authors
Li, Xi-Lin [1,2,3]
Affiliations
[1] Univ Maryland Baltimore Cty, Machine Learning Signal Proc Lab, Baltimore, MD 21228 USA
[2] Fortemedia Inc, Santa Clara, CA USA
[3] Cisco Syst Inc, San Jose, CA USA
Keywords
Neural network; Newton method; nonconvex optimization; preconditioner; stochastic gradient descent (SGD);
DOI
10.1109/TNNLS.2017.2672978
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably, but many attempts in this direction either aim at solving specialized problems or result in methods significantly more complicated than SGD. This paper proposes a new method to adaptively estimate a preconditioner such that the amplitudes of perturbations of the preconditioned stochastic gradient match those of the perturbations of the parameters to be optimized, in a manner comparable to the Newton method for deterministic optimization. Unlike preconditioners based on secant-equation fitting, as used in deterministic quasi-Newton methods, which assume a positive-definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimization with exact or noisy gradients. When stochastic gradients are used, it naturally damps the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications they are applicable to large-scale problems. Experimental results demonstrate that, equipped with the new preconditioner and without any tuning effort, preconditioned SGD can efficiently solve many challenging problems, such as the training of a deep neural network or of a recurrent neural network requiring extremely long-term memories.
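The following is a minimal, hypothetical sketch of the idea summarized above: SGD is preconditioned as x ← x − μ·P·g with P = QᵀQ, and Q is fitted online from pairs of a small parameter perturbation and the resulting gradient perturbation so that their amplitudes are balanced after preconditioning. The toy quadratic problem, step sizes, and the specific update rule for Q are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10

# Ill-conditioned quadratic loss f(x) = 0.5 * x^T H x with noisy gradients.
A = rng.standard_normal((dim, dim))
H = A @ A.T / dim + 0.1 * np.eye(dim)
x = rng.standard_normal(dim)

Q = np.eye(dim)          # upper-triangular factor of the preconditioner, P = Q^T Q
lr_x, lr_Q = 0.1, 0.1    # assumed step sizes

for step in range(500):
    # One "mini-batch": reuse the same noise for both gradient evaluations so
    # that the pair (dx, dg) probes curvature rather than sampling noise.
    noise = 0.01 * rng.standard_normal(dim)
    g = H @ x + noise                        # noisy stochastic gradient
    dx = 1e-4 * rng.standard_normal(dim)     # small random parameter perturbation
    dg = (H @ (x + dx) + noise) - g          # induced gradient perturbation

    # Update Q by a normalized relative-gradient step on a criterion that
    # balances Q dg against Q^{-T} dx (a sketch of the fitting idea).
    a = Q @ dg
    b = np.linalg.solve(Q.T, dx)             # b = Q^{-T} dx
    G = np.triu(np.outer(a, a) - np.outer(b, b))
    Q -= lr_Q / (np.max(np.abs(G)) + 1e-12) * (G @ Q)

    # Preconditioned SGD step: x <- x - lr * P * g with P = Q^T Q.
    x -= lr_x * (Q.T @ (Q @ g))

print("final loss:", 0.5 * x @ H @ x)
```

With exact gradients the fitted P approaches the inverse Hessian on this quadratic, while gradient noise shrinks P and damps the updates, which is the behavior the abstract describes.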
Pages: 1454 - 1466
Page count: 13