Dynamic Stale Synchronous Parallel Distributed Training for Deep Learning

Cited by: 46
Authors
Zhao, Xing [1 ]
An, Aijun [1 ]
Liu, Junfeng [2 ]
Chen, Bao Xin [1 ]
Affiliations
[1] York Univ, Dept Elect Engn & Comp Sci, Toronto, ON, Canada
[2] IBM Canada, Platform Comp, Markham, ON, Canada
Source
2019 39TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2019) | 2019
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
distributed deep learning; parameter server; BSP; ASP; SSP; GPU cluster;
DOI
10.1109/ICDCS.2019.00150
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep learning is a popular machine learning technique and has been applied to many real-world problems, ranging from computer vision to natural language processing. However, training a deep neural network is very time-consuming, especially on big data, and it has become difficult for a single machine to train a large model over large datasets. A popular solution is to distribute and parallelize the training process across multiple machines using the parameter server framework. In this paper, we present a distributed paradigm on the parameter server framework called Dynamic Stale Synchronous Parallel (DSSP), which improves the state-of-the-art Stale Synchronous Parallel (SSP) paradigm by dynamically determining the staleness threshold at runtime. Conventionally, to run distributed training with SSP, the user must specify a staleness threshold as a hyper-parameter. However, users usually do not know how to set the threshold and often find a value through trial and error, which is time-consuming. Based on workers' recent processing times, DSSP adaptively adjusts the threshold per iteration at runtime to reduce how long faster workers wait for synchronization of the globally shared parameters (the weights of the model). This increases the frequency of parameter updates (i.e., iteration throughput), which in turn speeds up convergence. We compare DSSP with other paradigms such as Bulk Synchronous Parallel (BSP), Asynchronous Parallel (ASP), and SSP by running deep neural network (DNN) models over GPU clusters in both homogeneous and heterogeneous environments. The results show that in a heterogeneous environment, where the cluster consists of mixed models of GPUs, DSSP converges to a higher accuracy much earlier than SSP and BSP and performs similarly to ASP. In a homogeneous distributed cluster, DSSP has more stable and slightly better performance than SSP and ASP, and converges much faster than BSP.
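To make the idea in the abstract concrete, the following is a minimal Python sketch of an SSP-style synchronization check whose staleness threshold is chosen at runtime from workers' recent iteration times. It is not the authors' implementation; the class and method names (DSSPServer, report_iteration, may_proceed) and the specific threshold rule are assumptions for illustration only.

from statistics import mean

class DSSPServer:
    """Illustrative parameter-server-side bookkeeping for an SSP-style
    barrier whose staleness threshold is recomputed at runtime from
    workers' recent per-iteration times (hypothetical sketch)."""

    def __init__(self, num_workers, s_min=1, s_max=8, window=10):
        self.clock = [0] * num_workers                      # iterations completed per worker
        self.iter_times = [[] for _ in range(num_workers)]  # recent per-iteration durations (seconds)
        self.s_min, self.s_max = s_min, s_max               # bounds on the dynamic threshold
        self.window = window                                # number of recent iterations to remember

    def report_iteration(self, worker_id, duration):
        # Called by a worker after it pushes its gradients for one iteration.
        self.clock[worker_id] += 1
        times = self.iter_times[worker_id]
        times.append(duration)
        if len(times) > self.window:
            times.pop(0)

    def _dynamic_threshold(self):
        # Assumed rule: allow a fast worker to run roughly as many extra
        # iterations as it can finish while the slowest worker finishes one,
        # clamped to [s_min, s_max].
        averages = [mean(t) for t in self.iter_times if t]
        if len(averages) < 2:
            return self.s_min
        ratio = max(averages) / min(averages)               # slowest / fastest average iteration time
        return int(min(self.s_max, max(self.s_min, round(ratio))))

    def may_proceed(self, worker_id):
        # SSP condition with a dynamic threshold: a worker may start its next
        # iteration only if it is at most `threshold` iterations ahead of the
        # slowest worker; otherwise it waits.
        return self.clock[worker_id] - min(self.clock) <= self._dynamic_threshold()

In this sketch, a worker would call report_iteration after each gradient push and block on may_proceed before its next pull, mirroring the usual SSP barrier except that the threshold is recomputed every iteration instead of being fixed up front as a hyper-parameter.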
Pages: 1507-1517
Page count: 11