rTop-k: A Statistical Estimation Approach to Distributed SGD

Cited by: 35
Authors
Barnes, Leighton Pate [1]
Inan, Huseyin A. [1]
Isik, Berivan [1]
Ozgur, Ayfer [1]
Affiliations
[1] Stanford Univ, Dept Elect Engn, Stanford, CA 94305 USA
Source
IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY | 2020, Vol. 1, No. 3
Keywords
Distributed training; federated learning; stochastic gradient descent; statistical estimation; Fisher information; sparse Bernoulli model
DOI
10.1109/JSAIT.2020.3042094
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The large communication cost of exchanging gradients between nodes significantly limits the scalability of distributed training for large-scale learning models. Motivated by this observation, there has been significant recent interest in techniques that reduce the communication cost of distributed Stochastic Gradient Descent (SGD), with gradient sparsification techniques such as top-k and random-k shown to be particularly effective. The same observation has also motivated a separate line of work in distributed statistical estimation theory focusing on the impact of communication constraints on the estimation efficiency of different statistical models. The primary goal of this paper is to connect these two lines of research and demonstrate how statistical estimation models and their analysis can lead to new insights in the design of communication-efficient training techniques. We propose a simple statistical estimation model for the stochastic gradients which captures the sparsity and skewness of their distribution. The statistically optimal communication scheme arising from the analysis of this model leads to a new sparsification technique for SGD, which concatenates random-k and top-k, considered separately in the prior literature. We show through extensive experiments in both image and language domains, using the CIFAR-10, ImageNet, and Penn Treebank datasets, that the concatenated application of these two sparsification methods consistently and significantly outperforms either method applied alone.
Pages: 897-907 (11 pages)
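
Editor's note: the following is a minimal NumPy sketch of the rTop-k sparsifier described in the abstract above, assuming the two stages compose as top-r selection followed by uniform random-k sampling (with r >= k); the function name rtop_k, the parameter names r and k, and this realization are illustrative, not the authors' reference implementation.

    import numpy as np

    def rtop_k(grad, r, k, rng=None):
        # Assumed composition of the two stages from the abstract:
        # Stage 1 (top-r): find the indices of the r largest-magnitude coordinates.
        # Stage 2 (random-k): keep k of those r indices, chosen uniformly at random.
        rng = np.random.default_rng() if rng is None else rng
        top_r = np.argpartition(np.abs(grad), -r)[-r:]
        idx = rng.choice(top_r, size=k, replace=False)
        return idx, grad[idx]

    # Example: sparsify a large gradient; only (idx, vals) would be communicated.
    g = np.random.standard_normal(1_000_000)
    idx, vals = rtop_k(g, r=1000, k=100)
    sparse_g = np.zeros_like(g)
    sparse_g[idx] = vals

Under this sketch, setting r = k recovers plain top-k, while setting r equal to the gradient dimension recovers plain random-k, which is one way to read the abstract's description of the scheme as a concatenation of the two methods.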