IQN: AN INCREMENTAL QUASI-NEWTON METHOD WITH LOCAL SUPERLINEAR CONVERGENCE RATE

Cited by: 47
Authors
Mokhtari, Aryan [1 ]
Eisen, Mark [1 ]
Ribeiro, Alejandro [1 ]
Affiliations
[1] Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104 USA
Keywords
large-scale optimization; stochastic optimization; quasi-Newton methods; incremental methods; superlinear convergence
DOI
10.1137/17M1122943
Chinese Library Classification
O29 [Applied Mathematics];
Discipline Code
070104;
Abstract
The problem of minimizing an objective that can be written as the sum of a set of n smooth and strongly convex functions is challenging because the cost of evaluating the function and its derivatives is proportional to the number of elements in the sum. The incremental quasi-Newton (IQN) method proposed here belongs to the family of stochastic and incremental methods whose cost per iteration is independent of n. IQN iterations are a stochastic version of BFGS iterations that use memory to reduce the variance of stochastic approximations. The method is shown to exhibit local superlinear convergence. The convergence properties of IQN bridge the gap between deterministic and stochastic quasi-Newton methods. Deterministic quasi-Newton methods exploit the possibility of approximating the Newton step using objective gradient differences. They are appealing because they have a smaller computational cost per iteration than Newton's method and achieve a superlinear convergence rate under customary regularity assumptions. Stochastic quasi-Newton methods use stochastic gradient differences in lieu of actual gradient differences, which makes their computational cost per iteration independent of the number of objective functions n. However, existing stochastic quasi-Newton methods achieve at best sublinear or linear convergence. IQN is the first stochastic quasi-Newton method proven to converge superlinearly in a local neighborhood of the optimal solution. IQN differs from state-of-the-art incremental quasi-Newton methods in three respects: (i) it aggregates information about variables, gradients, and quasi-Newton Hessian approximation matrices to reduce the noise of gradient and Hessian approximations; (ii) it approximates each individual function by its Taylor expansion, in which the linear and quadratic terms are evaluated at the same iterate; and (iii) it updates the functions with a cyclic scheme rather than a random selection routine. We use these fundamental properties of IQN to establish its local superlinear convergence rate. The presented numerical experiments match our theoretical results and demonstrate the advantage of IQN over other incremental methods.
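The abstract's description of the method (per-function memory of variables, gradients, and BFGS Hessian approximations, an aggregated Taylor-type quadratic model, and cyclic selection of which function to refresh) can be illustrated with a short sketch. The Python code below is a hypothetical illustration written for this record, not the paper's pseudocode; the name iqn_sketch, the argument names, the identity initialization of the Hessian approximations, and the curvature safeguard are assumptions, and the published IQN update may differ in details.

# Minimal sketch of an IQN-style iteration for minimizing (1/n) * sum_i f_i(x),
# built only from the ingredients named in the abstract. Hypothetical code, not
# the authors' implementation.
import numpy as np

def iqn_sketch(grads, x0, n, num_iters, eps=1e-10):
    """grads: list of n callables, grads[i](x) -> gradient of f_i at x."""
    d = x0.size
    z = [x0.copy() for _ in range(n)]        # per-function variable copies
    g = [grads[i](x0) for i in range(n)]     # per-function stored gradients
    B = [np.eye(d) for _ in range(n)]        # per-function Hessian approximations
    x = x0.copy()
    for t in range(num_iters):
        # Aggregated quadratic (Taylor-type) model built from the stored triples.
        B_sum = sum(B)
        u = sum(Bi @ zi - gi for Bi, zi, gi in zip(B, z, g))
        x = np.linalg.solve(B_sum, u)        # minimizer of the aggregated model
        i = t % n                            # cyclic index selection
        s = x - z[i]                         # variable difference for function i
        g_new = grads[i](x)
        y = g_new - g[i]                     # gradient difference for function i
        # Standard BFGS update of B_i, skipped when curvature information is unreliable.
        if y @ s > eps:
            Bs = B[i] @ s
            B[i] = B[i] - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        z[i], g[i] = x.copy(), g_new         # refresh the memory for function i
    return x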
Pages
1670-1698 (29 pages)