Investigating the Factors Impacting Adversarial Attack and Defense Performances in Federated Learning

Cited by: 4
Authors
Aljaafari, Nura [1 ]
Nazzal, Mahmoud [2 ]
Sawalmeh, Ahmad H.
Khreishah, Abdallah [2 ]
Anan, Muhammad [3 ]
Algosaibi, Abdulelah [1 ]
Alnaeem, Mohammed Abdulaziz [4 ]
Aldalbahi, Adel [1 ]
Alhumam, Abdulaziz [1 ]
Vizcarra, Conrado P. [1 ]
Affiliations
[1] King Faisal Univ, Dept Comp Sci, Al Hufuf 31982, Saudi Arabia
[2] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[3] Alfaisal Univ, Dept Software Engn, Riyadh 11533, Saudi Arabia
[4] King Faisal Univ, Dept Comp Networks & Commun, Al Hufuf 31982, Saudi Arabia
Keywords
Training; Data models; Task analysis; Analytical models; Servers; Computational modeling; Complexity theory; Adversarial attacks; adversarial defense; federated learning; machine learning security; privacy
DOI
10.1109/TEM.2022.3155353
CLC classification
F [Economics];
Discipline code
02;
Abstract
Despite the promising success of federated learning in various application areas, its inherent vulnerability to adversarial attacks hinders its applicability in security-critical areas. This calls for developing viable defense measures against such attacks. A prerequisite for such development, however, is an understanding of what creates, promotes, and aggravates this vulnerability. To date, this understanding remains an outstanding gap in the literature. Accordingly, this article attempts to develop such an understanding, primarily from two perspectives. The first concerns the factors, elements, and parameters contributing to the vulnerability of federated learning models to adversarial attacks, their degrees of severity, and their combined effects, including diverse operating conditions, attack types and scenarios, and collaborations between attacking agents. The second concerns how the adversarial property of a model manifests in the way it updates its coefficients, and how this can be exploited for defense. These analyses are conducted through extensive experiments on image and text classification tasks. Simulation results reveal the impact of specific parameters and factors on the severity of this vulnerability. Moreover, the proposed defense strategy is shown to provide promising performance.
Pages: 12542-12555 (14 pages)