DPAD: Data Poisoning Attack Defense Mechanism for federated learning-based system

Cited by: 0
Authors
Basak, Santanu [1 ]
Chatterjee, Kakali [1 ]
Affiliations
[1] Natl Inst Technol Patna, Dept Comp Sci & Engn, Patna 800005, Bihar, India
Keywords
Data Poisoning Attack; Data Poisoning Attack Defense; Federated learning; Machine learning; Machine learning attack; Secure aggregation process
DOI
10.1016/j.compeleceng.2024.109893
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL)-based approaches are being adopted rapidly in areas such as home automation, smart healthcare, and smart cars. In FL, multiple users collaborate in a distributed fashion to construct a global model without sharing their raw data. An FL-based system resolves several issues of central-server-based machine learning, such as data availability and user privacy, but other problems remain, including data poisoning attacks and re-identification attacks. This paper proposes a Data Poisoning Attack Defense (DPAD) mechanism that efficiently detects and defends against data poisoning attacks and secures the aggregation process in federated learning-based systems. DPAD verifies each client's update through an audit mechanism that decides whether the local update is considered for aggregation. Experimental results show the effectiveness of the attack and the strength of the DPAD mechanism compared with state-of-the-art methods.
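The abstract does not describe the internals of the audit mechanism, so the sketch below is only a minimal illustration of the general idea of audit-gated aggregation: each client update is screened against a robust reference before being averaged into the global model. The function names (audit_update, audited_aggregate), the cosine-similarity test, and the threshold are assumptions made for illustration, not the DPAD algorithm itself.

```python
import numpy as np

def audit_update(update, reference, threshold=0.5):
    # Hypothetical audit rule (an assumption, not the paper's actual DPAD check):
    # accept a client update only if its cosine similarity to a robust
    # reference direction (the coordinate-wise median of all updates)
    # is at least `threshold`.
    cos = np.dot(update, reference) / (
        np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12
    )
    return cos >= threshold

def audited_aggregate(client_updates, threshold=0.5):
    # Audit every client update, then average (FedAvg-style) only the
    # updates that pass the audit.
    updates = np.stack(client_updates)        # shape: (num_clients, num_params)
    reference = np.median(updates, axis=0)    # robust reference update
    accepted = [u for u in updates if audit_update(u, reference, threshold)]
    if not accepted:                          # fall back if every update is rejected
        return reference
    return np.mean(accepted, axis=0)

# Toy run: nine benign updates plus one sign-flipped, scaled (poisoned) update.
rng = np.random.default_rng(0)
benign = [rng.normal(1.0, 0.1, size=10) for _ in range(9)]
poisoned = [-5.0 * np.ones(10)]
print(audited_aggregate(benign + poisoned))  # poisoned update is filtered out
```

In this toy setup the poisoned update points away from the median direction, fails the audit, and is excluded, so the aggregated result stays close to the benign mean.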
Pages: 15