Random Reshuffling: Simple Analysis with Vast Improvements

Cited: 0
Authors
Mishchenko, Konstantin [1]
Khaled, Ahmed [2]
Richtarik, Peter [1]
Affiliations
[1] KAUST, Thuwal, Saudi Arabia
[2] Cairo Univ, Cairo, Egypt
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020 | 2020 / Vol. 33
Keywords
CONVERGENCE
DOI
None available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Random Reshuffling (RR) is an algorithm for minimizing finite-sum functions that utilizes iterative gradient descent steps in conjunction with data reshuffling. Often contrasted with its sibling Stochastic Gradient Descent (SGD), RR is usually faster in practice and enjoys significant popularity in convex and non-convex optimization. The convergence rate of RR has attracted substantial attention recently and, for strongly convex and smooth functions, it was shown to converge faster than SGD if 1) the stepsize is small, 2) the gradients are bounded, and 3) the number of epochs is large. We remove these 3 assumptions, improve the dependence on the condition number from κ² to κ (resp. from κ to √κ) and, in addition, show that RR has a different type of variance. We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR. To go beyond strong convexity, we present several results for non-strongly convex and non-convex objectives. We show that in all cases, our theory improves upon existing literature. Finally, we prove fast convergence of the Shuffle-Once (SO) algorithm, which shuffles the data only once, at the beginning of the optimization process. Our theory for strongly convex objectives tightly matches the known lower bounds for both RR and SO and substantiates the common practical heuristic of shuffling once or only a few times. As a byproduct of our analysis, we also get new results for the Incremental Gradient algorithm (IG), which does not shuffle the data at all.
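The four sampling schemes the abstract compares differ only in how the component index is chosen at each step. The sketch below is not the authors' code; it is a minimal illustration, with all function and parameter names (`random_reshuffling`, `grad_i`, `scheme`) assumed for this example, of RR (fresh permutation each epoch), SO (one permutation reused forever), IG (fixed order), and with-replacement SGD on a toy finite sum:

```python
import random

def random_reshuffling(grad_i, n, x0, stepsize, epochs, scheme="RR", seed=0):
    """Finite-sum gradient descent under different data orderings.

    scheme: "RR"  - reshuffle the data at the start of every epoch
            "SO"  - shuffle once, before the first epoch, then reuse
            "IG"  - fixed order 0..n-1, no shuffling at all
            "SGD" - sample indices with replacement each step
    grad_i(x, i) must return the gradient of the i-th component function.
    """
    rng = random.Random(seed)
    x = x0
    perm = list(range(n))
    if scheme == "SO":
        rng.shuffle(perm)              # Shuffle-Once: single permutation
    for _ in range(epochs):
        if scheme == "RR":
            rng.shuffle(perm)          # Random Reshuffling: new permutation
        order = ([rng.randrange(n) for _ in range(n)]
                 if scheme == "SGD" else perm)
        for i in order:                # one epoch = n component steps
            x = x - stepsize * grad_i(x, i)
    return x

# Toy problem: f(x) = (1/n) * sum_i (x - b_i)^2, whose minimizer is mean(b).
b = [1.0, 2.0, 3.0, 4.0]
g = lambda x, i: 2.0 * (x - b[i])
x_rr = random_reshuffling(g, len(b), 0.0, 0.05, 200, scheme="RR")
```

With a fixed stepsize, each scheme converges only to a neighborhood of the minimizer 2.5; the paper's point is that the ordering changes the size and character of that residual variance.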
Pages: 12