AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification

Cited by: 4
Authors
Gong, Zirui [1 ]
Shen, Liyue [2 ]
Zhang, Yanjun [3 ]
Zhang, Leo Yu [1 ]
Wang, Jingwei [2 ]
Bai, Guangdong [2 ]
Xiang, Yong [4 ]
Affiliations
[1] Griffith Univ, Sch Informat & Commun Technol, Southport, Qld 4215, Australia
[2] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4067, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
[4] Deakin Univ, Sch Informat Technol, Melbourne, Vic 3125, Australia
Keywords
Federated learning; Byzantine-robust aggregation; poisoning attack; explainable AI; backdoor
DOI
10.1109/TIFS.2023.3333555
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The collaborative nature of federated learning (FL) poses a major threat in the form of manipulation of local training data and local updates, known as the Byzantine poisoning attack. To address this issue, many Byzantine-robust aggregation rules (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants. This paper introduces a novel approach called AGRAMPLIFIER, which aims to simultaneously improve the robustness, fidelity, and efficiency of existing AGRs. The core idea of AGRAMPLIFIER is to amplify the "morality" of local updates by identifying the most repressive features of each gradient update, which provides a clearer distinction between malicious and benign updates and consequently improves detection. To achieve this, two approaches are proposed: AGRMP organizes local updates into patches and extracts the largest value from each patch, while AGRXAI leverages explainable AI methods to extract the gradient of the most activated features. By equipping existing Byzantine-robust mechanisms with AGRAMPLIFIER, we enhance model robustness while maintaining fidelity and improving overall efficiency. AGRAMPLIFIER is universally compatible with existing Byzantine-robust mechanisms, and the paper demonstrates its effectiveness by integrating it with all mainstream AGR mechanisms. Extensive evaluations on seven datasets from diverse domains against seven representative poisoning attacks consistently show gains in robustness, fidelity, and efficiency, averaging 40.08%, 39.18%, and 10.68%, respectively.
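The AGRMP idea summarized in the abstract amounts to patch-wise max-magnitude pooling over each flattened local update before a Byzantine-robust AGR inspects it. Below is a minimal, hypothetical Python sketch of that idea, assuming a fixed patch size and a simple coordinate-wise median as a stand-in robust AGR; the function names, patch size, and downstream aggregation are illustrative assumptions, not the authors' implementation (in the paper, the amplified representations drive the filtering of suspicious clients rather than being aggregated directly).

# Hypothetical sketch of an AGRMP-style "amplification" step: split each
# flattened local update into fixed-size patches and keep only the entry
# with the largest magnitude per patch, so a downstream Byzantine-robust
# AGR compares compressed, more discriminative representations.
import numpy as np

def amplify_update(update: np.ndarray, patch_size: int = 64) -> np.ndarray:
    """Max-magnitude pooling over non-overlapping patches of a flat update."""
    flat = update.ravel()
    pad = (-len(flat)) % patch_size          # pad so the length divides evenly
    patches = np.pad(flat, (0, pad)).reshape(-1, patch_size)
    idx = np.abs(patches).argmax(axis=1)     # largest-magnitude entry per patch
    return patches[np.arange(len(patches)), idx]

def robust_aggregate(updates: list[np.ndarray], patch_size: int = 64) -> np.ndarray:
    """Apply a coordinate-wise median (stand-in AGR) to the amplified updates."""
    amplified = np.stack([amplify_update(u, patch_size) for u in updates])
    return np.median(amplified, axis=0)

# Example: 5 benign updates plus 2 crudely scaled "Byzantine" updates.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.01, 1000) for _ in range(5)]
malicious = [rng.normal(0, 0.01, 1000) * 50 for _ in range(2)]
print(robust_aggregate(benign + malicious).shape)

In this toy setting, the amplified representations make the scaled malicious updates stand out more sharply per patch, which is the intuition the paper exploits when plugging the amplification step into existing AGRs.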
Pages: 1241-1250
Page count: 10
Related Papers (50 total)
  • [1] CONTRA: Defending Against Poisoning Attacks in Federated Learning
    Awan, Sana
    Luo, Bo
    Li, Fengjun
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 455 - 475
  • [2] Defending Against Targeted Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 198 - 207
  • [3] Defending Against Poisoning Attacks in Federated Learning with Blockchain
    Dong N.
    Wang Z.
    Sun J.
    Kampffmeyer M.
    Knottenbelt W.
    Xing E.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (07) : 1 - 13
  • [4] DPFLA: Defending Private Federated Learning Against Poisoning Attacks
    Feng, Xia
    Cheng, Wenhao
    Cao, Chunjie
    Wang, Liangmin
    Sheng, Victor S.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1480 - 1491
  • [5] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [6] Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning
    Tian, Yuchen
    Zhang, Weizhe
    Simpson, Andrew
    Liu, Yang
    Jiang, Zoe Lin
    COMPUTER JOURNAL, 2023, 66 (03) : 711 - 726
  • [7] A Blockchain-based Federated Learning Framework for Defending Against Poisoning Attacks in IIOT
    Xie, Jiale
    Feng, Libo
    Fang, Fake
    Yuan, Zehui
    Deng, Xian
    Liu, Junhong
    Wu, Peng
    Li, Zhuo
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 2442 - 2447
  • [8] Defending against Poisoning Attacks in Federated Learning from a Spatial-temporal Perspective
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    2023 42ND INTERNATIONAL SYMPOSIUM ON RELIABLE DISTRIBUTED SYSTEMS, SRDS 2023, 2023, : 25 - 34
  • [9] FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning
    Chen, Ling-Yuan
    Chiu, Te-Chuan
    Pang, Ai-Chun
    Cheng, Li-Chen
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [10] DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness
    Yan, Gang
    Wang, Hao
    Yuan, Xu
    Li, Jian
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10711 - 10719