Adaptive Stochastic Gradient Descent Method for Convex and Non-Convex Optimization

Cited by: 4
Authors
Chen, Ruijuan [1 ]
Tang, Xiaoquan [2 ]
Li, Xiuting [3 ]
Affiliations
[1] Wuhan Text Univ, Res Ctr Nonlinear Sci, Sch Math & Phys Sci, Wuhan 430200, Peoples R China
[2] Chinese Univ Hong Kong, Sch Sci & Engn, Shenzhen 518172, Peoples R China
[3] Huazhong Agr Univ, Coll Sci, Wuhan 430070, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
stochastic gradient descent; adaptive step size; convex optimization; non-convex optimization; SUBGRADIENT METHODS; APPROXIMATION; CONVERGENCE;
DOI
10.3390/fractalfract6120709
Chinese Library Classification (CLC)
O1 [Mathematics];
Subject Classification Code
0701 ; 070101 ;
Abstract
Stochastic gradient descent is the method of choice for solving large-scale optimization problems in machine learning. However, effectively selecting the step sizes in stochastic gradient descent methods is challenging and can greatly influence algorithm performance. In this paper, we propose a class of faster adaptive gradient descent methods, named AdaSGD, for solving both convex and non-convex optimization problems. The novelty of this method is a new adaptive step size that depends on the expectation of the past stochastic gradient and its second moment, which makes the method efficient and scalable for big data and high parameter dimensions. We show theoretically that the proposed AdaSGD algorithm achieves a convergence rate of O(1/T) in both convex and non-convex settings, where T is the maximum number of iterations. In addition, we extend AdaSGD to the momentum case and obtain the same convergence rate for AdaSGD with momentum. To illustrate the theoretical results, several numerical experiments on problems arising in machine learning verify the promise of the proposed method.
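The record does not reproduce the paper's exact step-size rule, so the following is only a minimal Python sketch of the general idea described in the abstract: an SGD update whose step size is driven by running estimates of the past stochastic gradient and its second moment (an Adam/RMSProp-style scheme). The least-squares objective, the decay factor beta, the base rate eta0, and the mini-batch size are illustrative assumptions, not the authors' AdaSGD.

```python
# Sketch of an adaptive SGD update in the spirit of the abstract: the step
# size adapts to running estimates of the stochastic gradient and its second
# moment. The update rule, beta, eta0, and the synthetic problem are
# illustrative assumptions, not the paper's exact AdaSGD algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize 0.5 * ||A x - b||^2 over x.
n_samples, dim = 1000, 20
A = rng.standard_normal((n_samples, dim))
x_true = rng.standard_normal(dim)
b = A @ x_true + 0.01 * rng.standard_normal(n_samples)

def stochastic_grad(x, batch_size=32):
    """Mini-batch gradient of the least-squares loss."""
    idx = rng.integers(0, n_samples, size=batch_size)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch_size

x = np.zeros(dim)
m = np.zeros(dim)   # running estimate of the gradient (first moment)
v = np.zeros(dim)   # running estimate of the squared gradient (second moment)
beta, eta0, eps = 0.9, 0.1, 1e-8
T = 500             # maximum number of iterations

for t in range(T):
    g = stochastic_grad(x)
    # Exponential moving averages of the past gradient and its second moment.
    m = beta * m + (1.0 - beta) * g
    v = beta * v + (1.0 - beta) * g**2
    # Adaptive step size: smaller where the second-moment estimate is large.
    step = eta0 / (np.sqrt(v) + eps)
    x -= step * m

print("final loss:", 0.5 * np.mean((A @ x - b) ** 2))
print("distance to x_true:", np.linalg.norm(x - x_true))
```

With momentum, as mentioned in the abstract, the averaged gradient m already plays the role of a momentum buffer; the paper's actual momentum variant and its constants are not specified in this record.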
Pages: 16