Genuinely distributed Byzantine machine learning

Cited by: 0
Authors
El-Mahdi El-Mhamdi
Rachid Guerraoui
Arsany Guirguis
Lê-Nguyên Hoang
Sébastien Rouault
Affiliations
[1] School of Computer and Communication Sciences (IC), École Polytechnique Fédérale de Lausanne (EPFL)
Source
Distributed Computing | 2022, Vol. 35
Keywords
Distributed machine learning; Robust machine learning; Byzantine fault tolerance; Byzantine parameter servers;
DOI
Not available
Abstract
Machine learning (ML) solutions are nowadays distributed, following the so-called server/worker architecture: one server holds the model parameters while several workers train the model. Clearly, such an architecture is prone to various types of component failures, all of which can be encompassed within the spectrum of Byzantine behavior. Several approaches have been proposed recently to tolerate Byzantine workers, yet all of them require trusting a central parameter server. We initiate in this paper the study of the "general" Byzantine-resilient distributed machine learning problem, in which no individual component is trusted. In particular, we distribute the parameter server computation over several nodes. We show that this problem can be solved in an asynchronous system despite the presence of $\frac{1}{3}$ Byzantine parameter servers (i.e., $n_{ps} > 3f_{ps}+1$) and $\frac{1}{3}$ Byzantine workers (i.e., $n_w > 3f_w$), which is asymptotically optimal.
We present a new algorithm, ByzSGD, which solves the general Byzantine-resilient distributed machine learning problem by relying on three major schemes. The first, scatter/gather, is a communication scheme whose goal is to bound the maximum drift among models on correct servers. The second, distributed median contraction (DMC), leverages the geometric properties of the median in high-dimensional spaces to bring the parameters on correct servers back close to each other, ensuring safe and live learning. The third, minimum-diameter averaging (MDA), is a statistically robust gradient aggregation rule whose goal is to tolerate Byzantine workers. MDA requires only a loose bound on the variance of non-Byzantine gradient estimates, compared to existing alternatives [e.g., Krum (Blanchard et al., in: Neural information processing systems, pp 118-128, 2017)]. Interestingly, ByzSGD ensures Byzantine resilience without adding communication rounds (on a normal path), compared to vanilla non-Byzantine alternatives. ByzSGD requires, however, a larger number of messages, which, we show, can be reduced if we assume synchrony. We implemented ByzSGD on top of both TensorFlow and PyTorch, and we report on our evaluation results. In particular, we show that ByzSGD guarantees convergence with around 32% overhead compared to vanilla SGD. Furthermore, we show that ByzSGD's throughput overhead is 24–176% in the synchronous case and 28–220% in the asynchronous case.
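As a rough illustration of the MDA aggregation rule described in the abstract, the sketch below (our own illustrative code, not the authors' implementation; the function name `mda` and all variable names are ours) averages the subset of $n - f$ submitted gradients whose maximum pairwise distance (diameter) is smallest, discarding up to $f$ outliers:

```python
import itertools
import numpy as np

def mda(gradients, f):
    """Minimum-diameter averaging (illustrative sketch).

    Among all subsets of n - f gradient vectors, find the one with the
    smallest diameter (largest pairwise Euclidean distance), then return
    the average of that subset. Brute-force enumeration: exponential in n,
    so this is only practical for small worker counts.
    """
    n = len(gradients)
    best_subset, best_diameter = None, float("inf")
    for subset in itertools.combinations(range(n), n - f):
        diameter = max(
            (np.linalg.norm(gradients[i] - gradients[j])
             for i, j in itertools.combinations(subset, 2)),
            default=0.0,
        )
        if diameter < best_diameter:
            best_diameter, best_subset = diameter, subset
    return np.mean([gradients[i] for i in best_subset], axis=0)
```

For example, with three honest gradients near (1, 1) and one Byzantine gradient at (100, 100), `mda(gradients, 1)` selects and averages the three honest ones, ignoring the outlier. The exhaustive subset search is what makes MDA's variance bound loose compared to Krum, at the cost of combinatorial aggregation time.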
Pages: 305-331
Page count: 26
Related papers
50 entries in total
  • [1] Genuinely distributed Byzantine machine learning
    El-Mhamdi, El-Mahdi
    Guerraoui, Rachid
    Guirguis, Arsany
    Hoang, Le-Nguyen
    Rouault, Sebastien
    DISTRIBUTED COMPUTING, 2022, 35 (04) : 305 - 331
  • [2] Byzantine fault tolerance in distributed machine learning: a survey
    Bouhata, Djamila
    Moumen, Hamouma
    Mazari, Jocelyn Ahmed
    Bounceur, Ahcene
    JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE, 2024,
  • [3] GARFIELD: System Support for Byzantine Machine Learning (Regular Paper)
    Guerraoui, Rachid
    Guirguis, Arsany
    Plassmann, Jeremy
    Ragot, Anton
    Rouault, Sebastien
    51ST ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS (DSN 2021), 2021, : 39 - 51
  • [4] SLC: A Permissioned Blockchain for Secure Distributed Machine Learning against Byzantine Attacks
    Liang, Lun
    Cao, Xianghui
    Zhang, Jun
    Sun, Changyin
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 7073 - 7078
  • [5] A Survey on Distributed Machine Learning
    Verbraeken, Joost
    Wolting, Matthijs
    Katzy, Jonathan
    Kloppenburg, Jeroen
    Verbelen, Tim
    Rellermeyer, Jan S.
    ACM COMPUTING SURVEYS, 2020, 53 (02)
  • [6] Distributed machine learning in networks by consensus
    Georgopoulos, Leonidas
    Hasler, Martin
    NEUROCOMPUTING, 2014, 124 : 2 - 12
  • [7] Distributed Machine Learning on IAAS Clouds
    Ta Nguyen Binh Duong
    Nguyen Quang Sang
    PROCEEDINGS OF 2018 5TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND INTELLIGENCE SYSTEMS (CCIS), 2018, : 58 - 62
  • [8] Privacy preserving distributed machine learning with federated learning
    Chamikara, M. A. P.
    Bertok, P.
    Khalil, I.
    Liu, D.
    Camtepe, S.
    COMPUTER COMMUNICATIONS, 2021, 171 : 112 - 125
  • [9] From distributed machine to distributed deep learning: a comprehensive survey
    Dehghani, Mohammad
    Yazdanparast, Zahra
    JOURNAL OF BIG DATA, 2023, 10 (01)