Toward Efficient and Certified Recovery From Poisoning Attacks in Federated Learning

Cited by: 1
Authors
Jiang, Yu [1]
Shen, Jiyuan [1]
Liu, Ziyao [2]
Tan, Chee Wei [1]
Lam, Kwok-Yan [1,2]
Affiliations
[1] Nanyang Technological University, College of Computing and Data Science (CCDS), Singapore 639798, Singapore
[2] Nanyang Technological University, Digital Trust Centre (DTC), Singapore 639798, Singapore
Funding
National Research Foundation, Singapore
Keywords
Computational modeling; Adaptation models; Servers; Training; Accuracy; Federated learning; Memory management; Data models; Data science; Computational efficiency; poisoning attack; model recovery; machine unlearning
DOI
10.1109/TIFS.2025.3533907
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients manipulate their updates to affect the global model. Although various methods exist for detecting such clients in FL, identifying malicious clients requires sufficient model updates, and hence by the time malicious clients are detected, FL models have already been poisoned. Thus, a method is needed to recover an accurate global model after malicious clients are identified. Current recovery methods rely on (i) all historical information from participating FL clients and (ii) the initial model unaffected by the malicious clients, both leading to a high demand for storage and computational resources. In this paper, we show that highly effective recovery can still be achieved based on 1) selective historical information rather than all historical information and 2) a historical model that has not been significantly affected by malicious clients rather than the initial model. In this scenario, we can accelerate the recovery speed and decrease memory consumption while maintaining comparable recovery performance. Following this concept, we introduce Crab (Certified Recovery from Poisoning Attacks and Breaches), an efficient and certified recovery method, which relies on selective information storage and adaptive model rollback. Theoretically, we demonstrate that the difference between the global model recovered by Crab and the one recovered by train-from-scratch can be bounded under certain assumptions. Our experiments, performed across four datasets with multiple machine learning models and aggregation methods, involving both untargeted and targeted poisoning attacks, demonstrate that Crab is not only accurate and efficient but also consistently outperforms previous approaches in recovery speed and memory consumption.
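To make the abstract's two ideas concrete, selective storage of historical information and rollback to a recent clean model, the sketch below simulates them in a toy FedAvg setting. It is a minimal illustration under stated assumptions, not the paper's Crab algorithm: the class name `SelectiveRecoveryServer`, the fixed every-k-rounds buffering rule, and the `attack_round` input are all illustrative choices, whereas Crab selects which rounds to store and where to roll back adaptively and certifies the recovered model.

```python
import numpy as np


class SelectiveRecoveryServer:
    """Toy FL server that buffers *selected* history for post-attack recovery."""

    def __init__(self, dim, buffer_every=2):
        self.model = np.zeros(dim)        # current global model
        self.buffer_every = buffer_every  # buffer history only every k-th round
        self.checkpoints = []             # (round, global-model snapshot)
        self.stored_updates = []          # (round, {client_id: update})

    def aggregate(self, rnd, client_updates):
        # FedAvg-style step: apply the mean of this round's client updates.
        self.model = self.model + np.mean(list(client_updates.values()), axis=0)
        # Selective storage: keep snapshots/updates for a subset of rounds
        # instead of all historical information.
        if rnd % self.buffer_every == 0:
            self.checkpoints.append((rnd, self.model.copy()))
            self.stored_updates.append(
                (rnd, {c: u.copy() for c, u in client_updates.items()})
            )

    def recover(self, malicious_ids, attack_round):
        # Adaptive rollback, reduced here to a simple rule: restart from the
        # latest buffered snapshot taken before the attack is believed to have
        # begun (`attack_round` is an assumed input), not from the initial model.
        malicious_ids = set(malicious_ids)
        rollback_round, model = 0, np.zeros_like(self.model)
        for rnd, snapshot in self.checkpoints:
            if rnd >= attack_round:
                break
            rollback_round, model = rnd, snapshot.copy()
        # Replay only the retained *benign* updates after the rollback point;
        # rounds that were never buffered are simply skipped, which is why the
        # result approximates (rather than equals) train-from-scratch.
        for rnd, updates in self.stored_updates:
            if rnd <= rollback_round:
                continue
            benign = [u for c, u in updates.items() if c not in malicious_ids]
            if benign:
                model = model + np.mean(benign, axis=0)
        self.model = model
        return self.model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    server = SelectiveRecoveryServer(dim=10, buffer_every=2)
    for rnd in range(1, 11):
        updates = {c: 0.01 * rng.normal(size=10) for c in range(4)}
        if rnd >= 6:
            updates[3] += 0.5  # client 3 turns malicious from round 6 onward
        server.aggregate(rnd, updates)
    recovered = server.recover(malicious_ids={3}, attack_round=6)
```

Because non-buffered rounds are never replayed, the recovered model only approximates full retraining; the paper's theoretical contribution is bounding exactly that gap under certain assumptions.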
Pages: 2632-2647
Page count: 16