FEDGUARD: Selective Parameter Aggregation for Poisoning Attack Mitigation in Federated Learning

Cited by: 3
Authors
Chelli, Melvin [1 ]
Prigent, Cedric [2 ]
Schubotz, Rene [1 ]
Costan, Alexandru [2 ]
Antoniu, Gabriel [2 ]
Cudennec, Loic [3 ]
Slusallek, Philipp [1 ]
Affiliations
[1] Deutsch Forschungszentrum Kunstliche Intelligenz, Saarland Informat Campus, Saarbrucken, Germany
[2] Univ Rennes, CNRS, INRIA, Rennes, France
[3] DGA Maitrise Informat, Rennes, France
Source
2023 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING, CLUSTER | 2023
Keywords
federated learning; malicious peer detection; robust federated learning; adversarial attacks; generative models; ROBUSTNESS
DOI
10.1109/CLUSTER52292.2023.00014
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline classification code
0812
Abstract
Minimizing the attack surface of Federated Learning (FL) systems is a field of active research: FL is highly vulnerable to a variety of threats originating at the edge of the network. Current approaches rely on robust aggregation, anomaly detection, and generative models to defend against poisoning attacks. Yet they either have limited defensive capabilities due to their underlying design or are impractical because they depend on constraining building blocks. We introduce FEDGUARD, a novel FL framework that uses the generative capabilities of Conditional Variational AutoEncoders (CVAEs) to defend effectively against poisoning attacks with tunable communication and computation overhead. While the idea of hardening an FL system with generative models is not entirely new, FEDGUARD's original contribution is its selective parameter aggregation operator, in which parameter selection is driven by synthetic validation data sampled from the CVAEs trained locally by each participating party. Experimental evaluations in a 100-client setup demonstrate that FEDGUARD is more effective than previous approaches against several types of attacks (label and sign flipping, additive noise, and same-value attacks). FEDGUARD successfully defends in scenarios with up to 50% malicious peers, where other strategies fail. In addition, FEDGUARD requires neither auxiliary datasets nor centralized (pre-)training, and it provides resilience against poisoning attacks from the very first round of federated training.
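For readers who want the gist of the mechanism in code, the sketch below (Python/PyTorch) shows one plausible reading of a FEDGUARD-style aggregation round. It is not the paper's implementation: the names fedguard_round, cvae.decode, cvae.latent_dim, num_classes, and keep_fraction are hypothetical, and all state-dict entries are assumed to be floating-point tensors. Each client supplies a locally trained CVAE; the server samples labelled synthetic validation data from these CVAEs, scores every submitted update on that data, and averages only the best-scoring fraction.

import copy
import torch
import torch.nn.functional as F

def fedguard_round(global_model, client_updates, client_cvaes,
                   num_classes=10, n_samples=128, keep_fraction=0.5):
    # Step 1: sample a shared synthetic validation set from every client's CVAE.
    xs, ys = [], []
    for cvae in client_cvaes:
        y = torch.randint(0, num_classes, (n_samples,))   # random class labels
        z = torch.randn(n_samples, cvae.latent_dim)       # latent prior (assumed attribute)
        xs.append(cvae.decode(z, y))                      # assumed conditional-decoder API
        ys.append(y)
    x_val, y_val = torch.cat(xs), torch.cat(ys)

    # Step 2: score each submitted update on the synthetic validation data.
    losses = []
    for state in client_updates:                          # each entry is a model state_dict
        candidate = copy.deepcopy(global_model)
        candidate.load_state_dict(state)
        candidate.eval()
        with torch.no_grad():
            losses.append(F.cross_entropy(candidate(x_val), y_val).item())

    # Step 3: selective aggregation -- FedAvg over the best-scoring updates only,
    # so poisoned updates (high loss on synthetic data) are excluded.
    k = max(1, int(keep_fraction * len(client_updates)))
    keep = sorted(range(len(losses)), key=losses.__getitem__)[:k]
    averaged = {name: torch.stack([client_updates[i][name] for i in keep]).mean(dim=0)
                for name in client_updates[keep[0]]}      # assumes float tensors only
    global_model.load_state_dict(averaged)
    return global_model

In this reading, the defense needs no auxiliary dataset: the validation data is generated on the fly from the clients' own CVAEs, which is consistent with the abstract's claim of resilience from the very first training round.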
Pages: 72-81
Page count: 10