Genuinely distributed Byzantine machine learning

Cited by: 0
Authors
El-Mahdi El-Mhamdi
Rachid Guerraoui
Arsany Guirguis
Lê-Nguyên Hoang
Sébastien Rouault
Affiliations
[1] Ecole Polytechnique Fédérale de Lausanne (EPFL), School of Computer and Communication Sciences (IC)
Source
Distributed Computing | 2022, Volume 35
Keywords
Distributed machine learning; Robust machine learning; Byzantine fault tolerance; Byzantine parameter servers;
DOI
Not available
Abstract
Machine learning (ML) solutions are nowadays distributed, according to the so-called server/worker architecture: one server holds the model parameters while several workers train the model. Clearly, such an architecture is prone to various types of component failures, which can all be encompassed within the spectrum of Byzantine behavior. Several approaches have been proposed recently to tolerate Byzantine workers. Yet all of them require trusting a central parameter server. We initiate in this paper the study of the "general" Byzantine-resilient distributed machine learning problem where no individual component is trusted. In particular, we distribute the parameter server computation on several nodes. We show that this problem can be solved in an asynchronous system, despite the presence of $\frac{1}{3}$ Byzantine parameter servers (i.e., $n_{ps} > 3f_{ps}+1$) and $\frac{1}{3}$ Byzantine workers (i.e., $n_w > 3f_w$), which is asymptotically optimal. We present a new algorithm, ByzSGD, which solves the general Byzantine-resilient distributed machine learning problem by relying on three major schemes. The first, scatter/gather, is a communication scheme whose goal is to bound the maximum drift among models on correct servers. The second, distributed median contraction (DMC), leverages the geometric properties of the median in high-dimensional spaces to bring the parameters on correct servers back close to each other, ensuring safe and lively learning. The third, minimum-diameter averaging (MDA), is a statistically robust gradient aggregation rule whose goal is to tolerate Byzantine workers. MDA requires a loose bound on the variance of non-Byzantine gradient estimates, compared to existing alternatives [e.g., Krum (Blanchard et al., in: Neural information processing systems, pp 118–128, 2017)]. Interestingly, ByzSGD ensures Byzantine resilience without adding communication rounds (on a normal path), compared to vanilla non-Byzantine alternatives. ByzSGD requires, however, a larger number of messages, which, we show, can be reduced if we assume synchrony. We implemented ByzSGD on top of both TensorFlow and PyTorch, and we report on our evaluation results. In particular, we show that ByzSGD guarantees convergence with around 32% overhead compared to vanilla SGD. Furthermore, we show that ByzSGD's throughput overhead is 24–176% in the synchronous case and 28–220% in the asynchronous case.
Pages: 305–331
Page count: 26
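
To make the aggregation rules named in the abstract concrete, the following is a minimal NumPy sketch of minimum-diameter averaging (MDA) and of a coordinate-wise median, the primitive that distributed median contraction (DMC) builds on. The function names, the brute-force subset search, and the majority assumption are illustrative choices for this sketch, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): Minimum-Diameter Averaging (MDA)
# and a coordinate-wise median, with gradients/models given as NumPy vectors.
from itertools import combinations
import numpy as np

def mda(gradients, f):
    """Average the n - f gradients forming the subset with the smallest diameter
    (largest pairwise Euclidean distance), discarding up to f outliers.
    Brute-force search: exponential in n, intended only for small worker counts."""
    n = len(gradients)
    assert n >= 2 * f + 1, "sketch assumes a majority of correct gradients"
    best_subset, best_diameter = None, float("inf")
    for subset in combinations(range(n), n - f):
        diameter = max(
            (np.linalg.norm(gradients[i] - gradients[j])
             for i, j in combinations(subset, 2)),
            default=0.0,
        )
        if diameter < best_diameter:
            best_subset, best_diameter = subset, diameter
    return np.mean([gradients[i] for i in best_subset], axis=0)

def coordinate_wise_median(models):
    """Coordinate-wise median of parameter vectors; a median-based contraction
    of this kind is what DMC uses to pull correct servers' models back together."""
    return np.median(np.stack(models), axis=0)
```

In the setting the abstract describes, an MDA-like rule would filter workers' gradient estimates while servers periodically apply a median-based contraction to their models; the exact protocol (scatter/gather phases, message complexity, asynchrony handling) is specified in the paper itself.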