A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks

Cited by: 101
Authors
Shi, Shaohuai [1 ]
Wang, Qiang [1 ]
Zhao, Kaiyong [1 ]
Tang, Zhenheng [1 ]
Wang, Yuxin [1 ]
Huang, Xiang [2 ]
Chu, Xiaowen [1 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
[2] Shenzhen Dist Block Technol Co Ltd, MassGrid Com, Shenzhen, Peoples R China
Source
2019 39TH IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2019) | 2019
Keywords
Deep Learning; Stochastic Gradient Descent; Distributed SGD; Gradient Communication; Top-k Sparsification; gTop-k;
DOI
10.1109/ICDCS.2019.00220
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Distributed synchronous stochastic gradient descent (S-SGD) with data parallelism has been widely used in training large-scale deep neural networks (DNNs), but it typically requires very high communication bandwidth between computational workers (e.g., GPUs) to exchange gradients iteratively. Recently, Top-k sparsification techniques have been proposed to reduce the volume of data exchanged among workers and thus alleviate the network pressure. Top-k sparsification can zero out a significant portion of gradients without impacting model convergence. However, the sparse gradients must be transferred together with their indices, and the irregular indices make sparse gradient aggregation difficult. Current methods that use AllGather to accumulate the sparse gradients have a communication complexity of O(kP), where P is the number of workers, which is inefficient on low-bandwidth networks with a large number of workers. We observe that not all top-k gradients from the P workers are needed for the model update, and therefore we propose a novel global Top-k (gTop-k) sparsification mechanism to address the difficulty of aggregating sparse gradients. Specifically, in each iteration we select the k gradients with the globally largest absolute values across the P workers, instead of accumulating all local top-k gradients to update the model. The gradient aggregation method based on gTop-k sparsification, namely gTopKAllReduce, reduces the communication complexity from O(kP) to O(k log P). Through extensive experiments on different DNNs, we verify that gTop-k S-SGD has nearly the same convergence behavior as S-SGD, with only a slight degradation in generalization performance. In terms of scaling efficiency, we evaluate gTop-k on a cluster of 32 GPU machines interconnected with 1 Gbps Ethernet. The experimental results show that our method achieves 2.7-12x higher scaling efficiency than S-SGD with dense gradients and a 1.1-1.7x improvement over the existing Top-k S-SGD.
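To make the gTop-k idea in the abstract concrete, below is a minimal, hedged sketch (not the authors' implementation): it simulates P workers locally with NumPy, takes each worker's local top-k gradients by absolute value, and then merges them pairwise in a tree, re-selecting the top-k after every merge so that each of the log2(P) rounds moves only O(k) values instead of O(kP). The function names (local_topk, merge_topk, gtopk_allreduce_sim) and the single-process simulation are assumptions for illustration; a real deployment would exchange the (index, value) pairs over MPI or NCCL.

```python
# Minimal sketch of gTop-k selection, assuming P is a power of two.
# Not the paper's gTopKAllReduce code; a local simulation of the idea.
import numpy as np


def local_topk(grad, k):
    """Return (indices, values) of the k largest-|.| entries of a dense gradient."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]


def merge_topk(a, b, k):
    """Merge two sparse (indices, values) sets, summing values that share an
    index, and keep only the k entries with the largest absolute values."""
    idx = np.concatenate([a[0], b[0]])
    val = np.concatenate([a[1], b[1]])
    uniq, inverse = np.unique(idx, return_inverse=True)
    summed = np.zeros(len(uniq), dtype=val.dtype)
    np.add.at(summed, inverse, val)          # accumulate duplicates across workers
    if len(uniq) > k:
        keep = np.argpartition(np.abs(summed), -k)[-k:]
    else:
        keep = np.arange(len(uniq))
    return uniq[keep], summed[keep]


def gtopk_allreduce_sim(dense_grads, k):
    """Tree reduction over P workers: log2(P) pairwise merge rounds, each
    exchanging O(k) values, giving O(k log P) total communication."""
    sparse = [local_topk(g, k) for g in dense_grads]
    while len(sparse) > 1:                   # each round halves the worker count
        sparse = [merge_topk(sparse[i], sparse[i + 1], k)
                  for i in range(0, len(sparse), 2)]
    return sparse[0]                         # global top-k (indices, values)


if __name__ == "__main__":
    P, n, k = 8, 1000, 16
    grads = [np.random.randn(n) for _ in range(P)]
    idx, val = gtopk_allreduce_sim(grads, k)
    print("global top-k indices:", np.sort(idx))
```

In this sketch, the key difference from plain Top-k AllGather is that the sparse set never grows beyond k entries after a merge, which is what keeps the per-round message size constant across the log2(P) rounds.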
Pages: 2238-2247
Page count: 10