AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

Cited by: 4
Authors
Gong, Zirui [1 ]
Shen, Liyue [2 ]
Zhang, Yanjun [3 ]
Zhang, Leo Yu [1 ]
Wang, Jingwei [2 ]
Bai, Guangdong [2 ]
Xiang, Yong [4 ]
Affiliations
[1] Griffith Univ, Sch Informat & Commun Technol, Southport, Qld 4215, Australia
[2] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4067, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
[4] Deakin Univ, Sch Informat Technol, Melbourne, Vic 3125, Australia
Keywords
Federated learning; Byzantine-robust aggregation; poisoning attack; explainable AI; backdoor
DOI
10.1109/TIFS.2023.3333555
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The collaborative nature of federated learning (FL) exposes it to the manipulation of local training data and local updates, known as the Byzantine poisoning attack. To address this issue, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces AGRAMPLIFIER, a novel approach that aims to simultaneously improve the robustness, fidelity, and efficiency of existing AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update, which provides a clearer distinction between malicious and benign updates and consequently improves detection. To achieve this, two methods are proposed: AGRMP and AGRXAI. AGRMP organizes local updates into patches and extracts the largest value from each patch, while AGRXAI leverages explainable AI methods to extract the gradients of the most activated features. Equipping existing Byzantine-robust mechanisms with AGRAMPLIFIER enhances model robustness while maintaining fidelity and improving overall efficiency. AGRAMPLIFIER is universally compatible with existing Byzantine-robust mechanisms, and the paper demonstrates its effectiveness by integrating it with all mainstream AGR mechanisms. Extensive evaluations on seven datasets from diverse domains against seven representative poisoning attacks consistently show gains in robustness, fidelity, and efficiency, averaging 40.08%, 39.18%, and 10.68%, respectively.
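To make the patch-wise amplification idea from the abstract concrete, below is a minimal sketch, assuming a simple max-pooling step over flattened gradient updates followed by a generic cosine-similarity filter; the names agrmp_amplify, robust_filter, patch_size, and keep_ratio are hypothetical, and the similarity filter only stands in for whichever Byzantine-robust AGR the amplifier is paired with, not the authors' implementation.

```python
import numpy as np

def agrmp_amplify(update, patch_size=64):
    """Patch-wise max amplification (illustrative sketch of the AGRMP idea).

    Flattens a local update, splits it into fixed-size patches, and keeps
    only the entry with the largest magnitude from each patch.
    """
    flat = np.asarray(update, dtype=np.float64).ravel()
    pad = (-flat.size) % patch_size            # zero-pad so patches divide evenly
    flat = np.concatenate([flat, np.zeros(pad)])
    patches = flat.reshape(-1, patch_size)
    idx = np.abs(patches).argmax(axis=1)       # dominant feature per patch
    return patches[np.arange(patches.shape[0]), idx]

def robust_filter(updates, patch_size=64, keep_ratio=0.7):
    """Stand-in distance-based AGR: score amplified updates by cosine
    similarity to their mean and keep the highest-scoring fraction."""
    amp = np.stack([agrmp_amplify(u, patch_size) for u in updates])
    centre = amp.mean(axis=0)
    sims = amp @ centre / (np.linalg.norm(amp, axis=1) * np.linalg.norm(centre) + 1e-12)
    n_keep = max(1, int(np.ceil(keep_ratio * len(updates))))
    kept = np.argsort(sims)[-n_keep:]
    return [updates[i] for i in kept]

# Toy example: 8 benign updates plus 2 sign-flipped (poisoned) ones.
rng = np.random.default_rng(0)
benign = [rng.normal(0.1, 0.05, size=1000) for _ in range(8)]
poisoned = [-u for u in benign[:2]]
kept = robust_filter(benign + poisoned)
print(f"kept {len(kept)} of {len(benign) + len(poisoned)} updates")
```

In this sketch, keeping only the dominant entry of each patch is what makes the sign-flipped updates stand far apart from the benign ones, so the downstream filter separates them more cleanly than it would on the raw gradients.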
Pages: 1241-1250
Number of pages: 10