Byzantine-Robust Distributed Learning With Compression

Times Cited: 3
Authors
Zhu, Heng [1 ]
Ling, Qing [2 ,3 ]
Affiliations
[1] Univ Calif San Diego, San Diego, CA 92093 USA
[2] Sun Yat Sen Univ, Guangdong Prov Key Lab Computat Sci, Guangzhou 510275, Peoples R China
[3] Sun Yat Sen Univ, Pazhou Lab, Guangzhou 510275, Peoples R China
Source
IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS | 2023, Vol. 9
Keywords
Stochastic processes; Distance learning; Computer aided instruction; Compressors; Convergence; Robustness; Signal processing algorithms; Distributed learning; communication efficiency; Byzantine-robustness; gradient compression; INTERNET; DESCENT;
DOI
10.1109/TSIPN.2023.3265892
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Communication between the workers and the master node to collect local stochastic gradients is a key bottleneck in a large-scale distributed learning system. Various recent works have proposed to compress the local stochastic gradients to mitigate this communication overhead. However, robustness to malicious attacks is rarely considered in such a setting. In this work, we investigate the problem of Byzantine-robust compressed distributed learning, where the attacks from Byzantine workers can be arbitrarily malicious. We theoretically show that, unlike attack-free compressed stochastic gradient descent (SGD), its vanilla combination with geometric median-based robust aggregation suffers severely from the compression noise in the presence of Byzantine attacks. In light of this observation, we propose to reduce the compression noise with gradient difference compression so as to improve the Byzantine-robustness. We also account for the intrinsic stochastic noise caused by selecting random samples, and adopt the stochastic average gradient algorithm (SAGA) to gradually eliminate the inner variations of the regular workers. We theoretically prove that the proposed algorithm reaches a neighborhood of the optimal solution at a linear convergence rate, and that its asymptotic learning error is of the same order as that of the state-of-the-art uncompressed method. Finally, numerical experiments demonstrate the effectiveness of the proposed method.
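To make the mechanism described in the abstract concrete, the toy sketch below combines a generic top-k compressor, gradient difference compression, and Weiszfeld-style geometric median aggregation on a simple quadratic problem. It is only an illustrative approximation of the idea, not the paper's exact algorithm: the SAGA correction is omitted, full (non-stochastic) local gradients are used, and all names and parameters (top_k_compress, geometric_median, lr, k) are assumptions made for the sketch.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v and zero the rest (a generic compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def geometric_median(points, iters=100, eps=1e-8):
    """Approximate the geometric median with Weiszfeld iterations (the robust aggregation rule)."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        w = 1.0 / d
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

# Toy setup: worker i holds f_i(x) = 0.5 * ||x - b_i||^2, so its local gradient is x - b_i.
rng = np.random.default_rng(0)
dim, n_workers, n_byz = 20, 10, 2
b = rng.normal(size=(n_workers, dim))      # regular workers' data (first n_byz rows unused)
x = np.zeros(dim)
h = np.zeros((n_workers, dim))             # per-worker reference vectors shared by worker and master
lr, k = 0.1, 5

for t in range(300):
    received = []
    for i in range(n_workers):
        if i < n_byz:
            # Byzantine worker: sends an arbitrary (here random) message instead of a valid update.
            received.append(rng.normal(scale=10.0, size=dim))
            continue
        g = x - b[i]                               # local gradient at the current model
        delta = top_k_compress(g - h[i], k)        # compress the *difference* to the reference vector
        h[i] = h[i] + delta                        # worker and master update the same reference
        received.append(h[i].copy())               # the master reconstructs h[i] from delta alone
    x = x - lr * geometric_median(np.array(received))

# The regular workers' joint optimum is the mean of their b_i; x should land in a neighborhood of it.
print("distance to regular optimum:", np.linalg.norm(x - b[n_byz:].mean(axis=0)))
```

Transmitting the compressed difference g - h[i] rather than the compressed gradient itself is what drives the compression noise to vanish as the references converge, which is the intuition behind the improved robustness claimed in the abstract.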
Pages: 280-294
Number of Pages: 15