Securing the collective intelligence: a comprehensive review of federated learning security attacks and defensive strategies

Cited by: 0
Authors
Kaushal, Vishal [1 ]
Sharma, Sangeeta [1 ]
Affiliations
[1] Natl Inst Technol, Comp Sci & Engn Dept, Hamirpur 177005, Himachal Pradesh, India
Keywords
Centralized learning; Federated learning; Threats; Defense; Aggregation algorithm; Poisoning attacks; Privacy; Challenges
DOI
10.1007/s10115-025-02339-z
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Federated learning holds significant potential as a collaborative machine learning technique, allowing multiple entities to train a shared model without exchanging their raw data. However, because training is distributed across many devices, federated learning is susceptible to a range of attacks. This paper provides an extensive examination of the different forms of attack that can target federated learning systems, including data poisoning, model poisoning, backdoor, Byzantine, membership inference, and model inversion attacks. Each attack is examined in detail, with examples from the literature, and potential countermeasures against these attacks are explored. The objective of this review is to provide an in-depth survey of the current landscape of federated learning attacks and the corresponding defense mechanisms.
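For context, the following is a minimal illustrative sketch, not taken from the paper, of the server-side aggregation step that the attacks and defenses above revolve around: standard federated averaging of client updates, contrasted with coordinate-wise median aggregation, one simple robust rule often discussed against poisoning and Byzantine behavior. All function names, client counts, and numbers are hypothetical.

# Illustrative sketch only (not from the paper): how a server might aggregate
# client model updates, and how a robust aggregator differs from plain averaging.
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted mean of client parameter vectors (standard FedAvg step)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    updates = np.stack(client_updates)           # shape: (num_clients, num_params)
    return (weights[:, None] * updates).sum(axis=0)

def coordinate_median(client_updates):
    """Coordinate-wise median: a simple Byzantine-robust alternative to the mean."""
    return np.median(np.stack(client_updates), axis=0)

# Toy example: two honest clients plus one client submitting a scaled (poisoned) update.
honest = [np.array([0.10, 0.20, 0.30]), np.array([0.12, 0.18, 0.31])]
poisoned = [np.array([10.0, -10.0, 10.0])]       # hypothetical model-poisoning update
updates = honest + poisoned
sizes = [100, 100, 100]

print("FedAvg :", fedavg(updates, sizes))        # pulled strongly toward the poisoned update
print("Median :", coordinate_median(updates))    # largely ignores the outlier

The contrast between the two outputs is the basic intuition behind the robust aggregation defenses surveyed in work of this kind: averaging is sensitive to a single malicious update, while order-statistic rules bound its influence.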
Pages: 3099-3137
Number of pages: 39