Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning

Cited by: 214
Authors
Shejwalkar, Virat [1 ]
Houmansadr, Amir [1 ]
Affiliations
[1] Univ Massachusetts Amherst, Amherst, MA 01003 USA
Source
28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021) | 2021
DOI
10.14722/ndss.2021.24498
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data. However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process. In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that outperform state-of-the-art model poisoning attacks by large margins. For instance, our attacks result in 1.5× to 60× higher reductions in the accuracy of FL models compared to previously discovered poisoning attacks. Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called divide-and-conquer (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; specifically, it is 2.5× to 12× more resilient in our experiments with different datasets and models.
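As a rough illustration of the kind of defense the abstract describes, the sketch below shows a hypothetical DnC-style robust aggregator in Python: it scores client updates by how strongly they project onto the top principal direction of the centered, coordinate-subsampled update matrix, discards the highest-scoring updates, and averages the rest. The function name, the parameters (subsample_dim, n_expected_malicious), and the exact filtering rule are assumptions made for illustration; this is not the paper's specification of DnC.

```python
# Minimal sketch of a DnC-style spectral-filtering aggregator (assumptions noted above).
import numpy as np

def dnc_style_aggregate(updates, n_expected_malicious, subsample_dim=1000, seed=0):
    """Aggregate client updates after spectral outlier filtering.

    updates: (n_clients, n_params) array; each row is one client's model update.
    """
    rng = np.random.default_rng(seed)
    n_clients, n_params = updates.shape

    # Subsample coordinates so the SVD stays cheap for high-dimensional models.
    idx = rng.choice(n_params, size=min(subsample_dim, n_params), replace=False)
    sub = updates[:, idx]

    # Center the subsampled updates and take the top right singular vector.
    centered = sub - sub.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_dir = vt[0]

    # Outlier score: squared projection of each centered update onto that direction.
    scores = (centered @ top_dir) ** 2

    # Keep the clients with the smallest scores and average their full updates.
    keep = np.argsort(scores)[: n_clients - n_expected_malicious]
    return updates[keep].mean(axis=0)

# Toy usage: 18 honest clients plus 2 colluding clients that push a large,
# shared malicious direction; the filter should drop the two outliers.
rng = np.random.default_rng(1)
honest = rng.normal(0.0, 0.1, size=(18, 5000))
malicious = np.tile(rng.normal(0.0, 0.1, size=(1, 5000)) + 5.0, (2, 1))
aggregate = dnc_style_aggregate(np.vstack([honest, malicious]), n_expected_malicious=2)
print(aggregate.shape)  # (5000,)
```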
Pages: 18