Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning

Cited by: 214
Authors
Shejwalkar, Virat [1]
Houmansadr, Amir [1]
Affiliations
[1] Univ Massachusetts Amherst, Amherst, MA 01003 USA
Source
28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021) | 2021
DOI
10.14722/ndss.2021.24498
CLC Classification
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data. However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) that aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process. In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework yields poisoning attacks that outperform state-of-the-art model poisoning attacks by large margins; for instance, our attacks cause 1.5x to 60x larger reductions in the accuracy of FL models than previously discovered poisoning attacks. Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called divide-and-conquer (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is 2.5x to 12x more resilient in our experiments with different datasets and models.
Pages: 18
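
To make the abstract's two mechanisms concrete, here is a minimal, self-contained Python/NumPy sketch. The attack shown is a crude scaled sign-flip stand-in, not the paper's optimized model poisoning attack, and dnc_aggregate follows only the high-level divide-and-conquer idea attributed to DnC (subsample coordinates, score each update by its projection onto the top singular direction of the centered updates, drop the highest-scoring ones before averaging); all function and parameter names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def dnc_aggregate(updates, n_malicious, n_iters=5, sub_dim=1000, filter_frac=1.0):
    """DnC-style robust mean (hypothetical names and defaults): repeatedly
    subsample coordinates, score each update by its squared projection onto
    the top singular direction of the centered updates, and drop the
    highest-scoring ones before averaging the survivors."""
    n, d = updates.shape
    n_drop = int(filter_frac * n_malicious)
    good = set(range(n))
    for _ in range(n_iters):
        # "Divide": a random coordinate subsample keeps the SVD cheap.
        idx = rng.choice(d, size=min(sub_dim, d), replace=False)
        centered = updates[:, idx] - updates[:, idx].mean(axis=0)
        # Top right-singular vector = dominant direction of variation.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = (centered @ vt[0]) ** 2
        # "Conquer": keep only the n - n_drop lowest-scoring updates.
        good &= set(np.argsort(scores)[: n - n_drop].tolist())
    kept = sorted(good)
    return updates[kept].mean(axis=0), kept


# Toy FL round: 20 benign clients plus 5 attackers who all send a scaled,
# sign-flipped version of the benign mean (a crude stand-in for the
# paper's optimized attack).
d = 10_000
benign = rng.normal(loc=1.0, scale=0.5, size=(20, d))
malicious = np.tile(-10.0 * benign.mean(axis=0), (5, 1))
updates = np.vstack([benign, malicious])

robust_mean, kept = dnc_aggregate(updates, n_malicious=5)
print("clients kept:", kept)          # expect the attackers (20-24) to be gone
print("plain mean, coord 0:", float(updates.mean(axis=0)[0]))
print("DnC mean,   coord 0:", float(robust_mean[0]))
```

On this toy round the attackers' updates should receive far larger outlier scores than the benign ones, so the DnC mean stays near the benign mean while the plain average is dragged strongly negative; the real defense operates on actual model updates and, per the abstract, is evaluated against optimized attacks across several datasets and models.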