Investigating the Factors Impacting Adversarial Attack and Defense Performances in Federated Learning

Cited by: 4
Authors
Aljaafari, Nura [1 ]
Nazzal, Mahmoud [2 ]
Sawalmeh, Ahmad H.
Khreishah, Abdallah [2 ]
Anan, Muhammad [3 ]
Algosaibi, Abdulelah [1 ]
Alnaeem, Mohammed Abdulaziz [4 ]
Aldalbahi, Adel [1 ]
Alhumam, Abdulaziz [1 ]
Vizcarra, Conrado P. [1 ]
Affiliations
[1] King Faisal Univ, Dept Comp Sci, Al Hufuf 31982, Saudi Arabia
[2] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[3] Alfaisal Univ, Dept Software Engn, Riyadh 11533, Saudi Arabia
[4] King Faisal Univ, Dept Comp Networks & Commun, Al Hufuf 31982, Saudi Arabia
Keywords
Training; Data models; Task analysis; Analytical models; Servers; Computational modeling; Complexity theory; Adversarial attacks; adversarial defense; federated learning; machine learning security; PRIVACY;
DOI
10.1109/TEM.2022.3155353
CLC Classification
F [Economics];
Discipline Code
02 ;
Abstract
Despite the promising success of federated learning in various application areas, its inherent vulnerability to adversarial attacks hinders its applicability in security-critical areas. This calls for developing viable defense measures against such attacks. A prerequisite for this development, however, is understanding what creates, promotes, and aggravates this vulnerability. To date, developing this understanding remains an outstanding gap in the literature. Accordingly, this paper presents an attempt at developing such an understanding, primarily from two main perspectives. The first perspective concerns identifying the factors, elements, and parameters contributing to the vulnerability of federated learning models to adversarial attacks, their degrees of severity, and their combined effects. This includes addressing diverse operating conditions, attack types and scenarios, and collaborations between attacking agents. The second perspective concerns analyzing how the adversarial property of a model manifests in the way it updates its coefficients, and exploiting this for defense purposes. These analyses are conducted through extensive experiments on image and text classification tasks. Simulation results reveal the impact of specific parameters and factors on the severity of this vulnerability. Moreover, the proposed defense strategy is shown to provide promising performance.
Pages: 12542-12555
Page count: 14
Related Papers (50 total)
  • [1] ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning
    Guo, Jingjing
    Li, Haiyang
    Huang, Feiran
    Liu, Zhiquan
    Peng, Yanguo
    Li, Xinghua
    Ma, Jianfeng
    Menon, Varun G.
    Igorevich, Konstantin Kostromitin
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (10) : 6526 - 6536
  • [2] Practical Attribute Reconstruction Attack Against Federated Learning
    Chen, Chen
    Lyu, Lingjuan
    Yu, Han
    Chen, Gang
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 851 - 863
  • [3] Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
    Al Mallah, Ranwa
    Lopez, David
    Badu-Marfo, Godwin
    Farooq, Bilal
    IEEE ACCESS, 2023, 11 : 125064 - 125079
  • [4] Analyzing User-Level Privacy Attack Against Federated Learning
    Song, Mengkai
    Wang, Zhibo
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    Ren, Ju
    Qi, Hairong
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (10) : 2430 - 2444
  • [5] Privacy-Enhanced Federated GNN Inference Against Adversarial Example Attack
    He, Guanghui
    Ren, Yanli
    Jiang, Jingyuan
    Feng, Guorui
    Zhang, Xinpeng
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024,
  • [6] Adaptive Selection of Loss Function for Federated Learning Clients Under Adversarial Attacks
    Lee, Suchul
    IEEE ACCESS, 2024, 12 : 96051 - 96062
  • [7] A Meta-Reinforcement Learning-Based Poisoning Attack Framework Against Federated Learning
    Zhou, Wei
    Zhang, Donglai
    Wang, Hongjie
    Li, Jinliang
    Jiang, Mingjian
    IEEE ACCESS, 2025, 13 : 28628 - 28644
  • [8] OQFL: An Optimized Quantum-Based Federated Learning Framework for Defending Against Adversarial Attacks in Intelligent Transportation Systems
    Yamany, Waleed
    Moustafa, Nour
    Turnbull, Benjamin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (01) : 893 - 903
  • [9] MODEL: A Model Poisoning Defense Framework for Federated Learning via Truth Discovery
    Wu, Minzhe
    Zhao, Bowen
    Xiao, Yang
    Deng, Congjian
    Liu, Yuan
    Liu, Ximeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 8747 - 8759
  • [10] Poisoning Attack in Federated Learning using Generative Adversarial Nets
    Zhang, Jiale
    Chen, Junjun
    Wu, Di
    Chen, Bing
    Yu, Shui
    2019 18TH IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS/13TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA SCIENCE AND ENGINEERING (TRUSTCOM/BIGDATASE 2019), 2019, : 374 - 380