Collusive Backdoor Attacks in Federated Learning Frameworks for IoT Systems

Cited by: 11
Authors
Alharbi, Saier [1 ]
Guo, Yifan [1 ]
Yu, Wei [1 ]
Affiliation
[1] Towson Univ, Dept Comp & Informat Sci, Towson, MD 21252 USA
Keywords
Perturbation methods; Vectors; Estimation; Training; Internet of Things; Data models; Federated learning; Backdoor attacks; collusion; deep learning (DL); federated learning (FL); Internet of Things (IoT)
DOI
10.1109/JIOT.2024.3368754
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Internet of Things (IoT) devices generate massive amounts of data locally, making federated learning (FL) a viable distributed machine learning paradigm for learning a global model while keeping private data local in various IoT systems. However, recent studies show that FL's decentralized nature makes it susceptible to backdoor attacks. Existing defenses, such as robust aggregation, have reduced attack success rates (ASRs) by identifying significant statistical differences between normal models and individual backdoored models. However, these defenses fail to consider potential collusion among attackers to bypass the statistical measures they rely on. In this article, we propose a novel attack approach, called collusive backdoor attacks (CBAs), which bypasses robust aggregation defenses by combining local backdoor training with post-training model manipulation among collusive attackers. In particular, we introduce a nontrivial perturbation estimation scheme that manipulates model update vectors after local backdoor training and uses the Gram-Schmidt process to speed up the estimation. This brings the magnitude of the perturbed poisoned model to the same level as that of normal models, evading robust aggregation-based defenses while maintaining attack efficacy. We then provide a pilot study to verify the feasibility of the perturbation estimation scheme, followed by a convergence analysis. Evaluated on four representative data sets, our CBA approach maintains high ASRs under benchmark robust aggregation defenses in both independent and identically distributed (IID) and non-IID local data settings. Notably, it increases the ASR by 126% on average compared to individual backdoor attacks.
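The perturbation idea described in the abstract can be sketched concretely. The Python snippet below is a minimal, hypothetical illustration, not the authors' actual scheme: it orthonormalizes the colluders' benign-looking update vectors with the Gram-Schmidt process, splits a poisoned update into its benign-subspace component and an orthogonal residual, damps the residual, and rescales the result to the mean benign norm so that a norm-based robust aggregator sees no statistical outlier. The function names, the damping factor alpha, and the projection-plus-rescaling strategy are all assumptions made for illustration.

```python
import numpy as np


def gram_schmidt(vectors, eps=1e-10):
    """Orthonormalize a list of update vectors (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for b in basis:
            w -= np.dot(w, b) * b  # remove components along existing basis
        norm = np.linalg.norm(w)
        if norm > eps:
            basis.append(w / norm)
    return basis


def perturb_poisoned_update(poisoned, benign_updates, alpha=0.5):
    """Hypothetical sketch: make a poisoned update resemble benign ones.

    Splits the poisoned update into its component inside the subspace
    spanned by benign updates and the orthogonal residual (which carries
    the backdoor signal), damps the residual by `alpha`, and rescales to
    the mean benign norm so norm-based robust aggregation is not tripped.
    """
    basis = gram_schmidt(benign_updates)
    projection = np.zeros_like(poisoned, dtype=float)
    for b in basis:
        projection += np.dot(poisoned, b) * b
    residual = poisoned - projection           # backdoor-carrying part
    perturbed = projection + alpha * residual  # stealth vs. efficacy trade-off
    target_norm = np.mean([np.linalg.norm(u) for u in benign_updates])
    return perturbed * (target_norm / np.linalg.norm(perturbed))


# Toy usage: three colluders share benign-looking updates; the poisoned
# update starts out abnormally large and is rescaled to blend in.
benign = [np.random.randn(8) for _ in range(3)]
poisoned = np.random.randn(8) * 5.0
stealthy = perturb_poisoned_update(poisoned, benign)
print(np.linalg.norm(poisoned), np.linalg.norm(stealthy))
```

In this toy setting the perturbed update's norm matches the benign average by construction; the paper's actual estimation scheme and its convergence guarantees are developed in the article itself.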
Pages: 19694-19707
Page count: 14