AFLGuard: Byzantine-robust Asynchronous Federated Learning

Cited by: 6
Authors
Fang, Minghong [1 ]
Liu, Jia [1 ]
Gong, Neil Zhenqiang [2 ]
Bentley, Elizabeth S. [3 ]
Affiliations
[1] Ohio State Univ, Columbus, OH 43210 USA
[2] Duke Univ, Durham, NC USA
[3] Air Force Res Lab, Rome, NY USA
Source
PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022 | 2022
Keywords
Federated Learning; Poisoning Attacks; Byzantine Robustness
DOI
10.1145/3564625.3567991
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) is an emerging machine learning paradigm, in which clients jointly learn a model with the help of a cloud server. A fundamental challenge of FL is that the clients are often heterogeneous, e.g., they have different computing power, and thus the clients may send model updates to the server with substantially different delays. Asynchronous FL aims to address this challenge by enabling the server to update the model once any client's model update reaches it, without waiting for other clients' model updates. However, like synchronous FL, asynchronous FL is also vulnerable to poisoning attacks, in which malicious clients manipulate the model by poisoning their local data and/or the model updates sent to the server. Byzantine-robust FL aims to defend against poisoning attacks. In particular, Byzantine-robust FL can learn an accurate model even if some clients are malicious and exhibit Byzantine behaviors. However, most existing studies on Byzantine-robust FL have focused on synchronous FL, leaving asynchronous FL largely unexplored. In this work, we bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL method. We show, both theoretically and empirically, that AFLGuard is robust against various existing and adaptive poisoning attacks (both untargeted and targeted). Moreover, AFLGuard outperforms existing Byzantine-robust asynchronous FL methods.
Pages: 632-646
Number of pages: 15
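
The abstract describes the asynchronous setting only at a high level and does not spell out AFLGuard's acceptance rule. The sketch below illustrates the generic structure it refers to: a server that applies each client's update as soon as it arrives, on a possibly stale model, and filters suspicious updates before applying them. The least-squares task, the threshold tau, and the norm-deviation test against a trusted reference gradient are illustrative assumptions for this sketch, not the mechanism proposed in the paper.

```python
# A minimal sketch of Byzantine-filtered asynchronous federated learning,
# assuming a simple norm-deviation acceptance test against a reference
# gradient computed on a small trusted dataset held by the server. The
# task (least squares), the threshold `tau`, and the helpers
# `local_gradient` / `trusted_gradient` are illustrative assumptions,
# not the acceptance criterion defined in the AFLGuard paper.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # model dimension
w_true = rng.normal(size=d)             # ground-truth model
w = np.zeros(d)                         # global model kept by the server
lr, tau = 0.1, 2.0                      # server step size, filter threshold

def data_batch(n):
    """Synthetic least-squares data: features X and noisy labels y."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

def local_gradient(w_stale, malicious=False, n=64):
    """A client's gradient on its local data, computed on a stale model."""
    X, y = data_batch(n)
    g = X.T @ (X @ w_stale - y) / n
    return -10.0 * g if malicious else g  # a malicious client poisons its update

def trusted_gradient(w_current, n=32):
    """Reference gradient from the server's small trusted dataset."""
    X, y = data_batch(n)
    return X.T @ (X @ w_current - y) / n

# Asynchronous loop: each arriving update was computed on a possibly stale
# model and is applied immediately, without waiting for other clients.
history = [w.copy()]
for step in range(300):
    w_stale = history[-1 - int(rng.integers(0, min(3, len(history))))]
    g_client = local_gradient(w_stale, malicious=(step % 5 == 0))
    g_ref = trusted_gradient(w)
    # Accept the client update only if it stays close to the trusted
    # reference update (illustrative filter, not AFLGuard's rule).
    if np.linalg.norm(g_client - g_ref) <= tau * np.linalg.norm(g_ref):
        w -= lr * g_client
    history.append(w.copy())

print("distance to ground truth:", np.linalg.norm(w - w_true))
```

In this toy run, the model still approaches the ground truth even though every fifth arriving update is poisoned, because those updates fail the deviation check and are discarded; honest but stale updates pass and are applied immediately.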