Efficient Byzantine-robust distributed inference with regularization: A trade-off between compression and adversary

Cited: 0
Authors
Zhou, Xingcai [1 ]
Yang, Guang [1 ]
Chang, Le [1 ]
Lv, Shaogao [1 ]
Affiliations
[1] Nanjing Audit Univ, Sch Stat & Math, Nanjing 211815, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Byzantine-robust; Distributed learning; Communication-efficient; Compression; Adversary; Statistical error rate; COMMUNICATION-EFFICIENT; VARIABLE SELECTION;
DOI
10.1016/j.ins.2024.121010
Chinese Library Classification
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
In large-scale distributed learning, traditional inference is often not directly applicable because of several concerns, such as communication costs, privacy issues, and Byzantine failures. Today's internet is vulnerable to attacks, and Byzantine failures occur frequently. To cope with Byzantine failures, this paper develops two Byzantine-robust distributed learning algorithms under a communication-efficient surrogate likelihood framework. Our algorithms adopt delta-approximate compressors, including the sign-based operator and top(k) sparsification, to improve communication efficiency, and a simple thresholding of local gradient norms to guard against Byzantine failures. To accelerate convergence and achieve an optimal statistical error rate, the second algorithm exploits error feedback. Both algorithms are robust to arbitrary adversaries, even though Byzantine workers need not adhere to the mandated compression mechanism. We explicitly establish statistical error rates, which imply that our algorithms do not sacrifice the quality of learning and attain order-optimal rates in some settings. In addition, we characterize a trade-off between compression and adversarial corruption in the presence of Byzantine worker machines. Extensive numerical experiments validate our theoretical results and demonstrate the good performance of our algorithms.
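The ingredients named in the abstract can be illustrated with a minimal sketch: delta-approximate compressors (sign-based and top-k), error feedback that carries the compression residual forward, and a simple norm-thresholding aggregator. This is a generic illustration under assumed conventions, not the paper's exact algorithms; the threshold `tau` and all function names are hypothetical.

```python
import numpy as np

def sign_compress(g):
    # Sign-based compressor: transmit only signs, scaled by the mean
    # absolute value so the overall magnitude is roughly preserved.
    return np.mean(np.abs(g)) * np.sign(g)

def topk_compress(g, k):
    # top(k) sparsification: keep only the k largest-magnitude entries.
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def worker_step(grad, residual, k):
    # Error feedback: compress the gradient plus the residual left over
    # from the previous round, and carry the new compression error forward.
    corrected = grad + residual
    msg = topk_compress(corrected, k)
    return msg, corrected - msg

def robust_aggregate(msgs, tau):
    # Norm thresholding: discard messages whose norm exceeds tau
    # (a simple guard against Byzantine workers), then average the rest.
    kept = [m for m in msgs if np.linalg.norm(m) <= tau]
    if not kept:
        return np.zeros_like(msgs[0])
    return np.mean(kept, axis=0)
```

In this sketch, a Byzantine worker that sends an arbitrarily large vector is simply filtered out by the norm threshold, while honest workers' compression errors are recovered over rounds through the residual term.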
Pages: 19