FLCert: Provably Secure Federated Learning Against Poisoning Attacks

Cited by: 33
Authors
Cao, Xiaoyu [1 ]
Zhang, Zaixi [2 ]
Jia, Jinyuan [3 ]
Gong, Neil Zhenqiang [4 ]
Affiliations
[1] Meta Platforms, Menlo Pk, CA 94025 USA
[2] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Anhui, Peoples R China
[3] Univ Illinois, Dept Comp Sci, Urbana, IL 61801 USA
[4] Duke Univ, Dept Elect & Comp Engn, 413 Wilkinson Bldg, Durham, NC 27708 USA
Funding
U.S. National Science Foundation;
Keywords
Computational modeling; Servers; Security; Data models; Training; Predictive models; Training data; Federated learning; provable security; poisoning attack; ensemble method;
DOI
10.1109/TIFS.2022.3212174
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline classification code
081202;
Abstract
Due to its distributed nature, federated learning is vulnerable to poisoning attacks, in which malicious clients poison the training process by manipulating their local training data and/or the local model updates sent to the cloud server, such that the poisoned global model misclassifies many indiscriminate test inputs or attacker-chosen ones. Existing defenses mainly leverage Byzantine-robust federated learning methods or detect malicious clients. However, these defenses do not have provable security guarantees against poisoning attacks and may be vulnerable to more advanced attacks. In this work, we aim to bridge this gap by proposing FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks with a bounded number of malicious clients. Our key idea is to divide the clients into groups, learn a global model for each group of clients using any existing federated learning method, and take a majority vote among the global models when classifying a test input. Specifically, we consider two methods to group the clients and correspondingly propose two variants of FLCert: FLCert-P, which randomly samples clients for each group, and FLCert-D, which divides the clients into disjoint groups deterministically. Our extensive experiments on multiple datasets show that the label predicted by FLCert for a test input is provably unaffected by a bounded number of malicious clients, no matter what poisoning attacks they use.
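To make the ensemble idea in the abstract concrete, the following minimal Python sketch illustrates an FLCert-D style workflow: clients are deterministically partitioned into disjoint groups, one global model is trained per group with any existing federated learning method, and a test input is classified by majority vote among the per-group models. The hash-based grouping rule, the placeholder train_global_model function, the num_groups value, and the model.predict interface are illustrative assumptions for this sketch, not the authors' implementation.

import hashlib
from collections import Counter

def flcert_d_groups(client_ids, num_groups):
    # Deterministically assign each client to one of num_groups disjoint groups.
    # Hashing the client id is one possible deterministic rule (an assumption here).
    groups = [[] for _ in range(num_groups)]
    for cid in client_ids:
        g = int(hashlib.md5(str(cid).encode()).hexdigest(), 16) % num_groups
        groups[g].append(cid)
    return groups

def flcert_predict(global_models, x):
    # Classify input x by majority vote among the per-group global models.
    votes = Counter(model.predict(x) for model in global_models)
    return votes.most_common(1)[0][0]

# Usage sketch: train one global model per group with any FL method, then vote.
# groups = flcert_d_groups(client_ids, num_groups=50)
# global_models = [train_global_model(g) for g in groups]   # hypothetical trainer
# label = flcert_predict(global_models, test_input)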
Pages: 3691-3705
Page count: 15