Accelerating Distributed Learning in Non-Dedicated Environments

Cited by: 5
Authors
Chen, Chen [1 ]
Weng, Qizhen [1 ]
Wang, Wei [1 ]
Li, Baochun [2 ]
Li, Bo [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Univ Toronto, Dept Elect & Comp Engn, Toronto, ON M5S, Canada
Keywords
Training; Load management; Load modeling; Synchronization; Computational modeling; Hardware; Graphics processing units; Distributed machine learning; Load balancing; Federated learning; Neural network; Gradient descent; Model; Prediction
DOI
10.1109/TCC.2021.3102593
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Machine learning (ML) models are increasingly trained by distributed workers with heterogeneous resources. In such scenarios, training efficiency can be degraded by stragglers, workers that run much slower than the rest. Efficient model training requires eliminating such stragglers, yet for modern ML workloads, existing load balancing strategies are inefficient or even infeasible. In this article, we propose a novel strategy, called semi-dynamic load balancing, to eliminate stragglers in distributed ML workloads. The key insight is that ML workers should be load-balanced at iteration boundaries, without intruding on intra-iteration execution. Based on this insight, we further develop LB-BSP, an integrated worker coordination mechanism that adapts each worker's load to its instantaneous processing capability by right-sizing the sample batches at the synchronization barriers. We design distinct load tuning algorithms for ML in CPU clusters, in GPU clusters, and in federated learning setups, based on their respective characteristics. LB-BSP has been implemented as a Python module for ML frameworks such as TensorFlow and PyTorch. Our EC2 deployment confirms that LB-BSP is practical, effective, and lightweight, and accelerates distributed training by up to 54 percent.
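To make the abstract's core mechanism concrete, the following is a minimal illustrative Python sketch, not the authors' implementation: the function name rebalance_batch_sizes and the proportional-to-throughput policy are assumptions chosen for illustration, and the paper's actual load tuning algorithms for CPU clusters, GPU clusters, and federated learning differ in their details. The sketch only shows the semi-dynamic idea of re-splitting a fixed global batch across workers at each synchronization barrier, based on the throughput each worker achieved in the previous iteration.

# A minimal sketch of semi-dynamic load balancing: at each synchronization
# barrier, re-split a fixed global batch across workers in proportion to the
# throughput (samples/second) each worker achieved in its last iteration.

def rebalance_batch_sizes(iter_times, batch_sizes, global_batch):
    """Return new per-worker batch sizes proportional to observed throughput.

    iter_times   -- seconds each worker spent on its last iteration
    batch_sizes  -- samples each worker processed in that iteration
    global_batch -- total samples per iteration (kept constant)
    """
    throughputs = [b / t for b, t in zip(batch_sizes, iter_times)]
    total = sum(throughputs)
    # Proportional split, rounded; hand any rounding remainder to the
    # fastest worker so the global batch size is preserved exactly.
    new_sizes = [max(1, round(global_batch * v / total)) for v in throughputs]
    new_sizes[throughputs.index(max(throughputs))] += global_batch - sum(new_sizes)
    return new_sizes


if __name__ == "__main__":
    # Worker 3 is a straggler (same batch, roughly 2x slower), so it receives
    # a smaller share of the next global batch and the barrier wait shrinks.
    iter_times = [1.0, 1.1, 2.0]      # seconds per iteration
    batch_sizes = [256, 256, 256]     # current per-worker batches
    print(rebalance_batch_sizes(iter_times, batch_sizes, global_batch=768))

Because the global batch size stays fixed, the adjustment is non-intrusive to the training algorithm itself; only the per-worker shares change, which shortens the wait at the synchronization barrier.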
Pages: 515-531
Page count: 17