A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks

Cited: 101
Authors
Shi, Shaohuai [1 ]
Wang, Qiang [1 ]
Zhao, Kaiyong [1 ]
Tang, Zhenheng [1 ]
Wang, Yuxin [1 ]
Huang, Xiang [2 ]
Chu, Xiaowen [1 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
[2] Shenzhen Dist Block Technol Co Ltd, MassGrid Com, Shenzhen, Peoples R China
Source
2019 39TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2019) | 2019
Keywords
Deep Learning; Stochastic Gradient Descent; Distributed SGD; Gradient Communication; Top-k Sparsification; gTop-k;
DOI
10.1109/ICDCS.2019.00220
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Distributed synchronous stochastic gradient descent (S-SGD) with data parallelism has been widely used in training large-scale deep neural networks (DNNs), but it typically requires very high communication bandwidth between computational workers (e.g., GPUs) to exchange gradients iteratively. Recently, Top-k sparsification techniques have been proposed to reduce the volume of data exchanged among workers and thus alleviate the network pressure. Top-k sparsification can zero out a significant portion of gradients without impacting model convergence. However, the sparse gradients must be transferred together with their indices, and the irregular indices make sparse gradient aggregation difficult. Current methods that use AllGather to accumulate the sparse gradients have a communication complexity of O(kP), where P is the number of workers, which is inefficient on low-bandwidth networks with a large number of workers. We observe that not all top-k gradients from the P workers are needed for the model update, and we therefore propose a novel global Top-k (gTop-k) sparsification mechanism to address the difficulty of aggregating sparse gradients. Specifically, we choose the global top-k largest absolute gradient values across the P workers, instead of accumulating all local top-k gradients, to update the model in each iteration. The gradient aggregation method based on gTop-k sparsification, namely gTopKAllReduce, reduces the communication complexity from O(kP) to O(k log P). Through extensive experiments on different DNNs, we verify that gTop-k S-SGD has nearly the same convergence performance as S-SGD, with only slight degradation in generalization performance. In terms of scaling efficiency, we evaluate gTop-k on a cluster of 32 GPU machines interconnected with 1 Gbps Ethernet. The experimental results show that our method achieves 2.7-12x higher scaling efficiency than S-SGD with dense gradients and a 1.1-1.7x improvement over the existing Top-k S-SGD.
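Note: the following is a minimal single-process sketch, in Python/NumPy, of the gTop-k idea summarized in the abstract: each worker keeps only its local top-k gradient entries, and pairs of workers repeatedly exchange their k (index, value) pairs, sum values at shared indices, and re-select the top-k of the merge, so that after about log2(P) rounds a single global top-k set remains and each round moves only O(k) data, matching the O(k log P) complexity claim. This is an illustration based on the abstract's description, not the authors' gTopKAllReduce implementation; the names local_topk, merge_topk, and gtopk_reduce are hypothetical.

import numpy as np

def local_topk(grad, k):
    # Local Top-k sparsification: keep the k largest-magnitude entries
    # of a dense gradient as an {index: value} dict.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return {int(i): float(grad[i]) for i in idx}

def merge_topk(a, b, k):
    # Merge two sparse top-k sets: sum values that share an index,
    # then keep only the k largest-magnitude results.
    merged = dict(a)
    for i, v in b.items():
        merged[i] = merged.get(i, 0.0) + v
    keep = sorted(merged, key=lambda i: abs(merged[i]), reverse=True)[:k]
    return {i: merged[i] for i in keep}

def gtopk_reduce(sparse_grads, k):
    # Tree-style reduction over P workers' sparse gradients:
    # about log2(P) merge rounds, each exchanging O(k) entries.
    active = list(sparse_grads)
    while len(active) > 1:
        nxt = []
        for j in range(0, len(active), 2):
            if j + 1 < len(active):
                nxt.append(merge_topk(active[j], active[j + 1], k))
            else:
                nxt.append(active[j])
        active = nxt
    return active[0]  # global top-k (index, value) pairs

if __name__ == "__main__":
    P, n, k = 8, 1000, 16          # workers, gradient length, sparsity
    rng = np.random.default_rng(0)
    grads = [rng.standard_normal(n) for _ in range(P)]
    global_topk = gtopk_reduce([local_topk(g, k) for g in grads], k)
    print(len(global_topk), "global top-k entries selected")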
Pages: 2238-2247
Number of Pages: 10