AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

Cited by: 4
Authors
Gong, Zirui [1 ]
Shen, Liyue [2 ]
Zhang, Yanjun [3 ]
Zhang, Leo Yu [1 ]
Wang, Jingwei [2 ]
Bai, Guangdong [2 ]
Xiang, Yong [4 ]
Affiliations
[1] Griffith Univ, Sch Informat & Commun Technol, Southport, Qld 4215, Australia
[2] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4067, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
[4] Deakin Univ, Sch Informat Technol, Melbourne, Vic 3125, Australia
Keywords
Federated learning; Byzantine-robust aggregation; poisoning attack; explainable AI; backdoor
DOI
10.1109/TIFS.2023.3333555
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
The collaborative nature of federated learning (FL) exposes it to a major threat: the manipulation of local training data and local updates, known as the Byzantine poisoning attack. To address this issue, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces a novel approach called AGRAMPLIFIER, which aims to simultaneously improve the robustness, fidelity, and efficiency of existing AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update, which provides a clearer distinction between malicious and benign updates and consequently improves detection. To achieve this, two approaches are proposed: AGRMP, which organizes local updates into patches and extracts the largest value from each patch, and AGRXAI, which leverages explainable AI methods to extract the gradients of the most activated features. By equipping existing Byzantine-robust mechanisms with AGRAMPLIFIER, we enhance model robustness while maintaining fidelity and improving overall efficiency. AGRAMPLIFIER is universally compatible with existing Byzantine-robust mechanisms; the paper demonstrates its effectiveness by integrating it with all mainstream AGR mechanisms. Extensive evaluations on seven datasets from diverse domains against seven representative poisoning attacks consistently show gains in robustness, fidelity, and efficiency, averaging 40.08%, 39.18%, and 10.68%, respectively.
Pages: 1241-1250
Page count: 10
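
The abstract describes AGRMP only at a high level: local updates are organized into patches, and the largest value is extracted from each patch before an AGR inspects them. The following minimal Python sketch illustrates that idea under stated assumptions; it is not the paper's implementation. The names agr_mp_amplify and patch_size are illustrative, flattening the update and pooling by signed magnitude (rather than raw value) are assumptions of this sketch, and the closing coordinate-wise median is merely a stand-in for whichever Byzantine-robust AGR consumes the amplified updates.

import numpy as np

def agr_mp_amplify(update, patch_size=8):
    # Flatten the local update and pad it so it splits evenly
    # into patches of `patch_size` entries. (Patching scheme and
    # patch_size are assumptions of this sketch.)
    flat = np.asarray(update, dtype=np.float64).ravel()
    pad = (-flat.size) % patch_size
    flat = np.pad(flat, (0, pad))
    patches = flat.reshape(-1, patch_size)
    # Keep the entry with the largest magnitude in each patch,
    # preserving its sign: a max-pooling-style amplification that
    # makes each update's dominant features stand out.
    idx = np.argmax(np.abs(patches), axis=1)
    return patches[np.arange(patches.shape[0]), idx]

# Usage: amplify every client's update, then hand the compact,
# amplified vectors to an existing Byzantine-robust AGR.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=1_000) for _ in range(10)]
amplified = np.stack([agr_mp_amplify(u) for u in client_updates])
robust_aggregate = np.median(amplified, axis=0)  # illustrative AGR only

Pooling by magnitude rather than raw value keeps large negative gradient entries visible, which matters if malicious perturbations can point in either direction; this design choice is an assumption of the sketch. In practice the amplified vectors would more plausibly guide which original updates the AGR keeps, rather than being aggregated directly as done in the last line above.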