Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization

Cited by: 4
Authors
Khaled, Ahmed [1 ]
Sebbouh, Othmane [2 ]
Loizou, Nicolas [3 ]
Gower, Robert M. [4 ]
Richtarik, Peter [5 ]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] ENS Paris, CREST ENSAE, Palaiseau, France
[3] Johns Hopkins Univ, Baltimore, MD USA
[4] Flatiron Inst, New York, NY USA
[5] KAUST, Thuwal, Saudi Arabia
Keywords
Stochastic optimization; Convex optimization; Variance reduction; Composite optimization; Descent; Selection
DOI
10.1007/s10957-023-02297-y
Chinese Library Classification
C93 [Management]; O22 [Operations Research];
Subject Classification Codes
070105; 12; 1201; 1202; 120202;
Abstract
We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov et al. (in: AISTATS, 2020) and dropping the requirement that the loss function be strongly convex. Instead, we rely only on convexity of the loss function. Our unified analysis applies to a host of existing algorithms such as proximal SGD, variance reduced methods, quantization and some coordinate descent-type methods. For the variance reduced methods, we recover the best known convergence rates as special cases. For proximal SGD, the quantization and coordinate-type methods, we uncover new state-of-the-art convergence rates. Our analysis also includes any form of sampling or minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments.
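To illustrate the composite setting the paper's theorem covers, here is a minimal proximal SGD sketch for minimizing a smooth least-squares loss plus an l1 regularizer, where the proximal operator of the regularizer is componentwise soft-thresholding. All function names, step sizes, and problem dimensions below are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd(A, b, lam=0.1, lr=0.01, epochs=50, batch=8, seed=0):
    """Proximal SGD for min_x (1/(2n)) ||Ax - b||^2 + lam * ||x||_1.
    A minimal sketch with a fixed step size and uniform minibatch sampling."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)   # minibatch sampling
            g = A[idx].T @ (A[idx] @ x - b[idx]) / batch     # stochastic gradient of the smooth loss
            x = soft_threshold(x - lr * g, lr * lam)         # proximal step on the regularizer
    return x

# Tiny usage example: recover a sparse vector from noiseless linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = prox_sgd(A, b, lam=0.01, lr=0.05, epochs=100)
```

The gradient step uses only the smooth part of the objective, while the nonsmooth regularizer enters solely through its proximal operator; this separation is what allows the paper's analysis to drop strong convexity of the loss and still cover arbitrary sampling and minibatching schemes.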
Pages: 499-540 (42 pages)
References (47 in total)
[1] Alistarh D, 2018, Advances in Neural Information Processing Systems, Vol. 31
[2] Alistarh D, 2017, Advances in Neural Information Processing Systems, Vol. 30
[3] Allen-Zhu Z, 2016, Proceedings of Machine Learning Research, Vol. 48
[4] Allen-Zhu Z. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. STOC'17: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017, pp. 1200-1205
[5] [Anonymous], 2018, Advances in Neural Information Processing Systems
[6] Atchadé YF, 2017, Journal of Machine Learning Research, Vol. 18, p. 1
[7] Beck A, 2017, MOS-SIAM Series on Optimization, p. 1, DOI 10.1137/1.9781611974997
[8] Chang C-C, Lin C-J. LIBSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3)
[9] Defazio A, 2014, Advances in Neural Information Processing Systems, Vol. 27
[10] Gazagnadou N, 2019, PMLR, p. 2142