Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence

Cited by: 63
Authors
Xin, Ran [1]
Khan, Usman A. [2]
Kar, Soummya [1]
Affiliations
[1] Carnegie Mellon University, Dept. of Electrical and Computer Engineering (ECE), Pittsburgh, PA 15213, USA
[2] Tufts University, Dept. of Electrical and Computer Engineering (ECE), Medford, MA 02155, USA
Funding
U.S. National Science Foundation
Keywords
Decentralized optimization; stochastic gradient methods; variance reduction; multi-agent systems; distributed optimization; strategies; algorithms; diffusion
DOI
10.1109/TSP.2020.3031071
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
This paper describes a novel algorithmic framework to minimize a finite sum of functions available over a network of nodes. The proposed framework, which we call GT-VR, is stochastic and decentralized, and is thus particularly suitable for problems where large-scale, potentially private data cannot be collected or processed at a centralized server. The GT-VR framework leads to a family of algorithms with two key ingredients: (i) local variance reduction, which enables estimating the local batch gradients from arbitrarily drawn samples of local data; and (ii) global gradient tracking, which fuses the gradient information across the nodes. Naturally, combining different variance reduction and gradient tracking techniques leads to different algorithms of interest with valuable practical tradeoffs and design considerations. Our focus in this paper is on two instantiations of the GT-VR framework, namely GT-SAGA and GT-SVRG, which, like their centralized counterparts (SAGA and SVRG), exhibit a compromise between space and time. We show that both GT-SAGA and GT-SVRG achieve accelerated linear convergence for smooth and strongly convex problems, and we further describe the regimes in which they achieve non-asymptotic, network-independent linear convergence rates that are faster than those of existing decentralized first-order schemes. Moreover, we show that in such regimes both algorithms achieve a linear speedup compared to their centralized counterparts that process all data at a single node. Extensive simulations illustrate the convergence behavior of the corresponding algorithms.
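For illustration, the listing below is a minimal sketch of a GT-SAGA-style iteration on a toy decentralized least-squares problem, combining the two ingredients named in the abstract: a SAGA gradient table at each node and a gradient tracker mixed through a doubly stochastic matrix. The problem data, step size, mixing matrix, and all variable names here are our own assumptions for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 50, 5                       # nodes, local samples per node, dimension
A = rng.normal(size=(n, m, d))           # local features (toy data)
b = rng.normal(size=(n, m))              # local targets (toy data)
W = np.full((n, n), 1.0 / n)             # doubly stochastic mixing matrix (complete graph)
step = 0.05                              # constant step size (assumed, not tuned)

def local_grad(i, s, x):
    # gradient of the s-th least-squares term held by node i
    a = A[i, s]
    return (a @ x - b[i, s]) * a

x = np.zeros((n, d))                     # one local iterate per node
table = np.zeros((n, m, d))              # SAGA gradient tables
for i in range(n):
    for s in range(m):
        table[i, s] = local_grad(i, s, x[i])
g = table.mean(axis=1)                   # current local gradient estimators
y = g.copy()                             # gradient trackers, initialized at the estimators

for k in range(200):
    x = W @ x - step * y                 # consensus (mixing) plus descent along the tracker
    g_new = np.empty_like(g)
    for i in range(n):
        s = rng.integers(m)              # draw one local sample index uniformly
        gi = local_grad(i, s, x[i])
        # SAGA-style variance-reduced estimate of the local batch gradient
        g_new[i] = gi - table[i, s] + table[i].mean(axis=0)
        table[i, s] = gi
    y = W @ y + g_new - g                # gradient tracking: fuse neighbors' trackers and correct
    g = g_new

Because the trackers are initialized at the local estimators and updated with the difference of consecutive estimators, their network average stays equal to the average of the local variance-reduced gradients at every iteration; this conservation property is what gradient tracking provides on general (sparser) mixing graphs as well.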
Pages: 6255-6271
Number of pages: 17