Dynamic Stale Synchronous Parallel Distributed Training for Deep Learning

Cited by: 46
Authors
Zhao, Xing [1 ]
An, Aijun [1 ]
Liu, Junfeng [2 ]
Chen, Bao Xin [1 ]
Affiliations
[1] York Univ, Dept Elect Engn & Comp Sci, Toronto, ON, Canada
[2] IBM Canada, Platform Comp, Markham, ON, Canada
Source
2019 39TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2019) | 2019
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
distributed deep learning; parameter server; BSP; ASP; SSP; GPU cluster;
DOI
10.1109/ICDCS.2019.00150
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Discipline classification code
0812
Abstract
Deep learning is a popular machine learning technique and has been applied to many real-world problems, ranging from computer vision to natural language processing. However, training a deep neural network is very time-consuming, especially on big data, and it has become difficult for a single machine to train a large model over large datasets. A popular solution is to distribute and parallelize the training process across multiple machines using the parameter server framework. In this paper, we present a distributed paradigm on the parameter server framework called Dynamic Stale Synchronous Parallel (DSSP), which improves the state-of-the-art Stale Synchronous Parallel (SSP) paradigm by dynamically determining the staleness threshold at run time. Conventionally, to run distributed training with SSP, the user needs to specify a particular staleness threshold as a hyper-parameter. However, a user usually does not know how to set the threshold and thus often finds a threshold value through trial and error, which is time-consuming. Based on workers' recent processing times, our approach DSSP adaptively adjusts the threshold per iteration at run time to reduce the time faster workers spend waiting for synchronization of the globally shared parameters (the weights of the model), and consequently increases the frequency of parameter updates (i.e., the iteration throughput), which speeds up convergence. We compare DSSP with other paradigms such as Bulk Synchronous Parallel (BSP), Asynchronous Parallel (ASP), and SSP by running deep neural network (DNN) models over GPU clusters in both homogeneous and heterogeneous environments. The results show that in a heterogeneous environment, where the cluster consists of mixed models of GPUs, DSSP converges to a higher accuracy much earlier than SSP and BSP and performs similarly to ASP. In a homogeneous distributed cluster, DSSP has more stable and slightly better performance than SSP and ASP, and converges much faster than BSP.
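The adaptive-threshold idea summarized in the abstract can be illustrated with a minimal, hypothetical sketch of a parameter-server-side controller. This is not the paper's algorithm or API: the class name DynamicStalenessController, the parameters lower_bound and extra_range, and the skew-based heuristic below are assumptions made purely for illustration of how per-worker timing could adjust the staleness bound per iteration.

# Hypothetical sketch of a dynamic staleness controller, loosely following the
# idea in the abstract: keep a lower-bound threshold (as in SSP) and let the
# server grant extra slack based on workers' recent step times.
# All names and the heuristic are illustrative assumptions, not the paper's API.
from collections import defaultdict


class DynamicStalenessController:
    def __init__(self, lower_bound=3, extra_range=12):
        self.lower_bound = lower_bound   # minimum allowed staleness (SSP-style s)
        self.extra_range = extra_range   # how far beyond s the server may stretch
        self.clock = defaultdict(int)    # per-worker iteration counter
        self.step_time = {}              # per-worker recent processing time (seconds)

    def report_step(self, worker_id, seconds):
        """Worker finished one iteration; record its clock and step time."""
        self.clock[worker_id] += 1
        self.step_time[worker_id] = seconds

    def _dynamic_threshold(self):
        """Widen the threshold when step times are skewed (heterogeneous cluster)."""
        if len(self.step_time) < 2:
            return self.lower_bound
        fastest = min(self.step_time.values())
        slowest = max(self.step_time.values())
        skew = slowest / max(fastest, 1e-9)      # >1 when some workers lag behind
        extra = min(self.extra_range, int(round(skew)) - 1)
        return self.lower_bound + max(0, extra)

    def may_proceed(self, worker_id):
        """A worker may start its next iteration only if it is not further ahead
        of the slowest worker than the current (dynamic) threshold allows."""
        slowest_clock = min(self.clock.values()) if self.clock else 0
        return self.clock[worker_id] - slowest_clock <= self._dynamic_threshold()

Under plain SSP, _dynamic_threshold would simply return lower_bound; the sketch only shows how per-iteration timing information could widen the permitted clock gap in a heterogeneous cluster, which is the behavior the abstract attributes to DSSP.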
Pages: 1507-1517
Number of pages: 11