Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning

Cited by: 4
Authors
Tian, Yuchen [1 ]
Zhang, Weizhe [1 ]
Simpson, Andrew [2 ]
Liu, Yang [1 ]
Jiang, Zoe Lin [1 ]
Affiliations
[1] Harbin Institute of Technology, Shenzhen, College of Computer Science and Technology, Shenzhen 518055, People's Republic of China
[2] University of Oxford, Department of Computer Science, Oxford OX1 3QD, UK
Keywords
distributed learning; federated learning; data poisoning attacks; AI security
DOI
10.1093/comjnl/bxab192
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Federated learning (FL), a variant of distributed learning (DL), supports the training of a shared model without accessing private data from different sources. Despite its benefits with regard to privacy preservation, FL's distributed nature and privacy constraints make it vulnerable to data poisoning attacks. Existing defenses, designed primarily for DL, are typically not well adapted to FL. In this paper, we study such attacks and defenses. We start from the perspective of DL and then consider a real-world FL scenario, with the aim of identifying the requisites of a desirable defense in FL. Our study shows that (i) the batch size used in each training round affects the effectiveness of defenses in DL, (ii) the defenses investigated are somewhat effective and moderately influenced by batch size in FL settings and (iii) non-IID data makes it more difficult to defend against data poisoning attacks in FL. Based on these findings, we discuss the key challenges and possible directions for defending against such attacks in FL. In addition, we propose Detect and Suppress the Potential Outliers (DSPO), a defense against data poisoning attacks in FL scenarios. Our results show that DSPO outperforms other defenses in several cases.
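The abstract does not spell out DSPO's mechanics. As a minimal illustrative sketch of the general "detect and suppress potential outliers" idea it names, the Python below scores each client's update by its distance from the coordinate-wise median of all updates and zero-weights those whose robust (MAD-based) z-score is anomalous. The function name dspo_aggregate, the z_thresh parameter, and the median/MAD scoring are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np


def dspo_aggregate(client_updates, z_thresh=2.0):
    """Aggregate client updates, suppressing likely-poisoned outliers.

    Hypothetical sketch, not the paper's DSPO algorithm.
    client_updates: list of 1-D numpy arrays, one flattened model
        update per client in the current round.
    z_thresh: robust z-score above which an update is suppressed
        (illustrative tuning knob).
    """
    updates = np.stack(client_updates)          # (n_clients, n_params)

    # Coordinate-wise median as a robust reference point.
    center = np.median(updates, axis=0)

    # Each client's distance from the reference.
    dists = np.linalg.norm(updates - center, axis=1)

    # Robust z-scores via the median absolute deviation (MAD);
    # 1.4826 rescales MAD to a Gaussian standard deviation.
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = (dists - np.median(dists)) / (1.4826 * mad)

    # Suppress (zero-weight) flagged outliers; average the rest.
    weights = (scores <= z_thresh).astype(float)
    if weights.sum() == 0:
        return center                           # fall back to the median
    return (updates * weights[:, None]).sum(axis=0) / weights.sum()


# Toy round: nine benign clients plus one scaled, poisoned update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
poisoned = [np.full(100, 5.0)]
aggregated = dspo_aggregate(benign + poisoned)
```

In this toy round the poisoned update sits far from the median and receives zero weight, so the aggregate stays close to the benign average; under non-IID data, where honest updates legitimately diverge, such distance-based suppression becomes harder to tune, which is consistent with finding (iii) above.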
Pages: 711-726
Page count: 16