Random Reshuffling: Simple Analysis with Vast Improvements

Cited by: 0
Authors
Mishchenko, Konstantin [1 ]
Khaled, Ahmed [2 ]
Richtarik, Peter [1 ]
Affiliations
[1] KAUST, Thuwal, Saudi Arabia
[2] Cairo Univ, Cairo, Egypt
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33 (NEURIPS 2020), 2020, Vol. 33
Keywords
CONVERGENCE
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Random Reshuffling (RR) is an algorithm for minimizing finite-sum functions that utilizes iterative gradient descent steps in conjunction with data reshuffling. Often contrasted with its sibling Stochastic Gradient Descent (SGD), RR is usually faster in practice and enjoys significant popularity in convex and non-convex optimization. The convergence rate of RR has attracted substantial attention recently and, for strongly convex and smooth functions, it was shown to converge faster than SGD if 1) the stepsize is small, 2) the gradients are bounded, and 3) the number of epochs is large. We remove these 3 assumptions, improve the dependence on the condition number from κ² to κ (resp. from κ to √κ) and, in addition, show that RR has a different type of variance. We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR. To go beyond strong convexity, we present several results for non-strongly convex and non-convex objectives. We show that in all cases, our theory improves upon existing literature. Finally, we prove fast convergence of the Shuffle-Once (SO) algorithm, which shuffles the data only once, at the beginning of the optimization process. Our theory for strongly convex objectives tightly matches the known lower bounds for both RR and SO and substantiates the common practical heuristic of shuffling once or only a few times. As a byproduct of our analysis, we also get new results for the Incremental Gradient algorithm (IG), which does not shuffle the data at all.
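The abstract contrasts three orderings of the component gradients: RR draws a fresh permutation every epoch, SO draws a single permutation before training, and IG never shuffles. The following is a minimal sketch of the three schemes, assuming a finite-sum objective f(x) = (1/n) Σᵢ fᵢ(x) with access to per-component gradients; the names finite_sum_descent and grad_i, the stepsize, and the least-squares toy problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def finite_sum_descent(grad_i, n, x0, stepsize, epochs, scheme="RR", seed=0):
    """One pass over components 0..n-1 per epoch; `scheme` picks the order:
    "RR" = fresh permutation every epoch, "SO" = one permutation drawn
    before training, "IG" = fixed order 0, 1, ..., n-1 (no shuffling)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    # SO draws its single permutation up front; IG keeps the identity order.
    perm = rng.permutation(n) if scheme == "SO" else np.arange(n)
    for _ in range(epochs):
        if scheme == "RR":
            perm = rng.permutation(n)  # reshuffle at the start of each epoch
        for i in perm:
            x -= stepsize * grad_i(i, x)  # one component-gradient step
    return x

# Toy usage: least squares with f_i(x) = 0.5 * (a_i @ x - b_i)**2,
# so grad f_i(x) = (a_i @ x - b_i) * a_i; the true solution is all ones.
A = np.random.default_rng(1).normal(size=(100, 5))
b = A @ np.ones(5)
x_hat = finite_sum_descent(lambda i, x: (A[i] @ x - b[i]) * A[i],
                           n=100, x0=np.zeros(5), stepsize=0.01,
                           epochs=50, scheme="RR")
print(np.linalg.norm(x_hat - np.ones(5)))  # should be small
```

Note that the only difference between the three schemes is when the permutation is drawn; the per-step update x ← x − γ∇fᵢ(x) is identical, which is what lets a single analysis cover RR, SO, and IG.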
Pages: 12