Shielding Federated Learning: A New Attack Approach and Its Defense

Cited by: 15
Authors
Wan, Wei [1 ]
Lu, Jianrong [2 ]
Hu, Shengshan [2 ]
Zhang, Leo Yu [3 ]
Pei, Xiaobing [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Software Engn, Wuhan 430074, Hubei, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Cyber Sci & Engn, Wuhan 430074, Peoples R China
[3] Deakin Univ, Sch Informat Technol, Geelong, Vic 3216, Australia
Source
2021 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC) | 2021
Funding
National Natural Science Foundation of China;
Keywords
federated learning; distributed computation; security;
DOI
10.1109/WCNC49053.2021.9417334
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) is a newly emerging distributed learning framework that is communication-efficient and provides user privacy guarantees. Wireless end-user devices can collaboratively train a global model while keeping their local training data private. Nevertheless, recent studies show that FL is highly susceptible to attacks from malicious users, since the server cannot directly access or audit users' local training data. In this work, we identify a new kind of attack surface that is much easier to exploit while maintaining a high attack success rate. By exploiting an inherent flaw in the weight assignment strategy of the standard federated learning process, our attack can bypass existing defense methods and effectively degrade the performance of the global model. We then propose a new density-based detection strategy to defend against this attack, modeling the problem as anomaly detection so that anomalous updates can be identified effectively. Experimental results on two typical datasets, MNIST and CIFAR-10, show that our attack significantly affects the convergence of the aggregated model and reduces the accuracy of the global model. This holds true even when state-of-the-art defense strategies are deployed, whereas our newly proposed defense effectively mitigates the attack.
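To make the two ideas in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a standard FedAvg-style server that weights client updates by their self-reported local dataset sizes (the weight assignment flaw an attacker can exploit by inflating its report), and it uses scikit-learn's LocalOutlierFactor as a stand-in density-based anomaly detector for the defense. The function names weighted_fedavg and density_filtered_fedavg are hypothetical.

```python
# Illustrative sketch only -- not the paper's actual attack or defense.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def weighted_fedavg(updates, reported_sizes):
    # Standard size-weighted aggregation: a malicious client that inflates its
    # reported dataset size receives a disproportionately large weight.
    sizes = np.asarray(reported_sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

def density_filtered_fedavg(updates, reported_sizes, n_neighbors=5):
    # Density-based filtering: flag low-density (anomalous) updates with LOF
    # and exclude them before the weighted aggregation step.
    X = np.stack(updates)                         # one row per client update
    labels = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X)
    keep = labels == 1                            # 1 = inlier, -1 = outlier
    kept_updates = [u for u, k in zip(updates, keep) if k]
    kept_sizes = [s for s, k in zip(reported_sizes, keep) if k]
    return weighted_fedavg(kept_updates, kept_sizes)

# Example: nine honest clients plus one client that submits a large malicious
# update and over-reports its dataset size.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.01, size=100) for _ in range(9)]
updates.append(rng.normal(5.0, 0.01, size=100))   # anomalous update
sizes = [100] * 9 + [10000]                       # inflated size report
print(np.linalg.norm(weighted_fedavg(updates, sizes)))          # dominated by attacker
print(np.linalg.norm(density_filtered_fedavg(updates, sizes)))  # attacker filtered out
```

In this toy setup the size-weighted average is dominated by the single over-weighted malicious update, while the density-filtered variant discards it before aggregation; the paper's own detector and attack construction differ in detail.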
Pages: 7