Collusive Backdoor Attacks in Federated Learning Frameworks for IoT Systems

Cited by: 6
Authors
Alharbi, Saier [1]
Guo, Yifan [1]
Yu, Wei [1]
Affiliations
[1] Towson Univ, Dept Comp & Informat Sci, Towson, MD 21252 USA
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 11
Keywords
Perturbation methods; Vectors; Estimation; Training; Internet of Things; Data models; Federated learning; Backdoor attacks; collusion; deep learning (DL); federated learning (FL); Internet of Things (IoT)
DOI
10.1109/JIOT.2024.3368754
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Internet of Things (IoT) devices generate massive amounts of data locally, making federated learning (FL) a viable distributed machine learning paradigm for learning a global model while keeping private data local in various IoT systems. However, recent studies show that FL's decentralized nature makes it susceptible to backdoor attacks. Existing defenses, such as robust aggregation, have reduced attack success rates (ASRs) by identifying significant statistical differences between normal and backdoored models individually. However, these defenses fail to consider potential collusion among attackers to bypass the statistical measures they rely on. In this article, we propose a novel attack approach, called collusive backdoor attacks (CBAs), which bypasses robust aggregation defenses by combining local backdoor training with post-training model manipulations among collusive attackers. In particular, we introduce a nontrivial perturbation estimation scheme that adds manipulations to model update vectors after local backdoor training, and we use the Gram-Schmidt process to speed up the estimation. This brings the magnitude of the perturbed poisoned model to the same level as that of normal models, evading robust aggregation-based defenses while maintaining attack efficacy. We then provide a pilot study to verify the feasibility of our perturbation estimation scheme, followed by its convergence analysis. Evaluating attack performance on four representative data sets, our CBA approach maintains high ASRs under benchmark robust aggregation defenses in both independent and identically distributed (IID) and non-IID local data settings. In particular, it increases the ASR by 126% on average compared to individual backdoor attacks.
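The abstract's core manipulation idea — orthonormalizing benign update directions with the Gram-Schmidt process and rescaling the poisoned update so its magnitude matches that of normal models — can be illustrated with a minimal sketch. The function names, the use of plain NumPy vectors in place of full model updates, and the simple mean-norm target are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of update vectors (classical Gram-Schmidt).

    Returns an orthonormal basis spanning the benign update directions;
    near-dependent vectors are dropped.
    """
    basis = []
    for v in vectors:
        # Subtract projections onto the basis built so far.
        w = v - sum(np.dot(v, b) * b for b in basis)
        n = np.linalg.norm(w)
        if n > 1e-10:
            basis.append(w / n)
    return basis

def norm_matched_update(poisoned, benign_updates):
    """Rescale the poisoned update so its magnitude matches the mean
    benign update norm, so norm-based screening cannot flag it."""
    target = np.mean([np.linalg.norm(u) for u in benign_updates])
    return poisoned * (target / np.linalg.norm(poisoned))
```

In this toy setting, a colluding attacker who can observe (or estimate) other clients' updates would pass its post-training poisoned update through `norm_matched_update` before submission, while the orthonormal basis from `gram_schmidt` speeds up projecting candidate perturbations onto the benign subspace.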
Pages: 19694-19707
Page count: 14