AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

Cited by: 4
Authors
Gong, Zirui [1 ]
Shen, Liyue [2 ]
Zhang, Yanjun [3 ]
Zhang, Leo Yu [1 ]
Wang, Jingwei [2 ]
Bai, Guangdong [2 ]
Xiang, Yong [4 ]
Affiliations
[1] Griffith Univ, Sch Informat & Commun Technol, Southport, Qld 4215, Australia
[2] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4067, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
[4] Deakin Univ, Sch Informat Technol, Melbourne, Vic 3125, Australia
Keywords
Federated learning; Byzantine-robust aggregation; poisoning attack; explainable AI; backdoor
DOI
10.1109/TIFS.2023.3333555
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Discipline classification code
081202
Abstract
The collaborative nature of federated learning (FL) exposes it to a major threat: the manipulation of local training data and local updates, known as the Byzantine poisoning attack. To address this issue, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces a novel approach called AGRAMPLIFIER, which aims to simultaneously improve the robustness, fidelity, and efficiency of existing AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update, which provides a clearer distinction between malicious and benign updates and consequently improves detection. To achieve this objective, two approaches are proposed, namely AGRMP and AGRXAI. AGRMP organizes local updates into patches and extracts the largest value from each patch, while AGRXAI leverages explainable AI methods to extract the gradients of the most activated features. By equipping existing Byzantine-robust mechanisms with AGRAMPLIFIER, we enhance model robustness while maintaining fidelity and improving overall efficiency. AGRAMPLIFIER is universally compatible with existing Byzantine-robust mechanisms, and the paper demonstrates its effectiveness by integrating it with all mainstream AGR mechanisms. Extensive evaluations on seven datasets from diverse domains against seven representative poisoning attacks consistently show enhancements in robustness, fidelity, and efficiency, with average gains of 40.08%, 39.18%, and 10.68%, respectively.
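The following is a minimal NumPy sketch of the patch-wise max-pooling idea the abstract attributes to AGRMP, not the paper's exact algorithm: the function names and the simple distance-to-median filter standing in for a full Byzantine-robust AGR are hypothetical. AGRXAI would follow the same pattern, but would select the coordinates to keep via an explainable-AI attribution method rather than patch maxima.

```python
import numpy as np

def agrmp_amplify(update, patch_size=4):
    """Reduce a flattened update to the max-magnitude entry of each patch,
    so the dominant features of the gradient stand out (AGRMP-style sketch)."""
    flat = np.asarray(update, dtype=float).ravel()
    pad = (-flat.size) % patch_size            # pad so patches divide evenly
    flat = np.pad(flat, (0, pad))
    patches = flat.reshape(-1, patch_size)
    idx = np.argmax(np.abs(patches), axis=1)   # position of the dominant entry
    return patches[np.arange(len(patches)), idx]

def amplified_filtering_aggregate(updates, patch_size=4, keep_ratio=0.5):
    """Score clients on their amplified sketches with a simple distance-to-median
    filter (a stand-in for an existing Byzantine-robust AGR), then average the
    original updates of the clients judged benign."""
    sketches = np.stack([agrmp_amplify(u, patch_size) for u in updates])
    center = np.median(sketches, axis=0)                # robust reference point
    dists = np.linalg.norm(sketches - center, axis=1)   # one score per client
    n_keep = max(1, int(len(updates) * keep_ratio))
    keep = np.argsort(dists)[:n_keep]                   # closest clients are kept
    return np.mean(np.stack([np.ravel(updates[i]) for i in keep]), axis=0)

# Toy example: 8 benign clients plus 2 clients sending large poisoned updates.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.1, 128) for _ in range(8)] + \
          [rng.normal(5, 0.1, 128) for _ in range(2)]
print(amplified_filtering_aggregate(updates).shape)     # -> (128,)
```

In this sketch the amplified representations are used only for scoring clients, while the original (unamplified) updates of the retained clients are aggregated; the same plug-in pattern would let the sketches feed any existing robust AGR.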
Pages: 1241-1250
Page count: 10
Related papers
50 records in total
  • [11] MATFL: Defending Against Synergetic Attacks in Federated Learning
    Yang, Wen
    Peng, Luyao
    Tang, Xiangyun
    Weng, Yu
Proceedings - IEEE Congress on Cybermatics: 2023 IEEE International Conferences on Internet of Things (iThings), IEEE Green Computing and Communications (GreenCom), IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), 2024, : 313 - 319
  • [12] Defending Against Byzantine Attacks in Quantum Federated Learning
    Xia, Qi
    Tao, Zeyi
    Li, Qun
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 145 - 152
  • [14] FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations
    Wang, Ning
    Xiao, Yang
    Chen, Yimin
    Hu, Yang
    Lou, Wenjing
    Hou, Y. Thomas
    ASIA CCS'22: PROCEEDINGS OF THE 2022 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2022, : 946 - 958
  • [15] FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
    Zhang, Zaixi
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2545 - 2555
  • [17] Defending against Adversarial Attacks in Federated Learning on Metric Learning Model
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 197 - 206
  • [18] DEFENDING AGAINST BACKDOOR ATTACKS IN FEDERATED LEARNING WITH DIFFERENTIAL PRIVACY
    Miao, Lu
    Yang, Wei
    Hu, Rong
    Li, Lu
    Huang, Liusheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2999 - 3003
  • [19] FedPD: Defending federated prototype learning against backdoor attacks
    Tan, Zhou
    Cai, Jianping
    Li, De
    Lian, Puwei
    Liu, Ximeng
    Che, Yan
    NEURAL NETWORKS, 2025, 184
  • [20] RoPE: Defending against backdoor attacks in federated learning systems
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    KNOWLEDGE-BASED SYSTEMS, 2024, 293