Towards Practical Backdoor Attacks on Federated Learning Systems

Cited by: 1
Authors
Shi, Chenghui [1 ]
Ji, Shouling [1 ]
Pan, Xudong [2 ]
Zhang, Xuhong [1 ]
Zhang, Mi [2 ]
Yang, Min [2 ]
Zhou, Jun [3 ]
Yin, Jianwei [1 ]
Wang, Ting [4 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Sch Comp Sci & Technol, Shanghai 200433, Peoples R China
[3] Ant Grp, Hangzhou 310000, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neurons; Computational modeling; Training; Task analysis; Data models; Servers; Face recognition; Federated learning; backdoor attack; deep neural networks
DOI
10.1109/TDSC.2024.3376790
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Federated Learning (FL) is nowadays one of the most promising paradigms for privacy-preserving distributed learning. Without revealing its local private data to outsiders, a client in an FL system collaborates to build a global Deep Neural Network (DNN) by submitting its local model parameter update to a central server for iterative aggregation. With secure multi-party computation protocols, the submitted update of any client is also by design invisible to the server. Seemingly, this standard design is a win-win for client privacy and service provider utility. Ironically, any attacker may also use manipulated or impersonated clients to submit almost arbitrary attack payloads under the umbrella of the FL protocol itself. In this work, we craft a practical backdoor attack on FL systems that is shown to be simultaneously effective and stealthy across diverse use cases of FL systems and on leading commercial FL platforms in the real world. At a high level, we first identify a small number of redundant neurons, i.e., neurons that tend to be rarely or only slightly updated in the model, and then inject the backdoor into these redundant neurons instead of the whole model. In this way, our backdoor attack achieves a high attack success rate with only a minor impact on the accuracy of the original task. As countermeasures, we further consider several common technical choices, including robust aggregation mechanisms, differential privacy mechanisms, and network pruning. However, none of these defenses shows the desired capability against our backdoor attack. Our results strongly highlight the vulnerability of existing FL systems to backdoor attacks and the urgent need for more effective defense mechanisms.
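The abstract describes the attack only at a high level: rank neurons by how little they change across global rounds, then confine backdoor training to that least-updated subset so the malicious update barely perturbs the main task. The following is a minimal PyTorch sketch of that idea under our own assumptions; the function names (neuron_update_magnitudes, redundant_neuron_masks, masked_backdoor_step), the per-neuron L2 scoring rule, and the 5% selection ratio are illustrative choices inferred from the abstract, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): score neurons by how much
# they change between consecutive global models, pick the least-updated
# ("redundant") ones, and apply backdoor gradients only to them.
import torch

def neuron_update_magnitudes(old_state, new_state):
    """Per-neuron change between two consecutive global model states."""
    scores = {}
    for name, old_w in old_state.items():
        delta = (new_state[name] - old_w).float()
        if delta.dim() > 1:  # weight tensor: one L2 score per output neuron
            scores[name] = delta.flatten(1).norm(dim=1)
        else:                # bias / 1-D tensor: one score per element
            scores[name] = delta.abs().flatten()
    return scores

def redundant_neuron_masks(scores, ratio=0.05):
    """Boolean mask selecting the `ratio` least-updated neurons per layer."""
    masks = {}
    for name, s in scores.items():
        k = max(1, int(ratio * s.numel()))
        idx = torch.topk(s, k, largest=False).indices
        mask = torch.zeros_like(s, dtype=torch.bool)
        mask[idx] = True
        masks[name] = mask
    return masks

def masked_backdoor_step(model, loss, masks, lr=0.01):
    """One backdoor-training step that touches only redundant neurons."""
    model.zero_grad()
    loss.backward()  # loss computed on trigger-stamped poisoned samples
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            m = masks[name]
            if p.dim() > 1:  # broadcast per-neuron mask over fan-in dims
                m = m.view(-1, *([1] * (p.dim() - 1)))
            p -= lr * p.grad * m
```

In an end-to-end attack, the malicious client would alternate such masked steps on trigger-stamped samples with ordinary training on clean data, so that the submitted update stays small outside the selected neurons; the paper's actual selection criterion and training schedule may differ from this sketch.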
Pages: 5431-5447
Number of pages: 17
Related Papers
50 records in total
  • [31] Facilitating Early-Stage Backdoor Attacks in Federated Learning With Whole Population Distribution Inference
    Liu, Tian
    Hu, Xueyang
    Shu, Tao
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (12) : 10385 - 10399
  • [32] FedPD: Defending federated prototype learning against backdoor attacks
    Tan, Zhou
    Cai, Jianping
    Li, De
    Lian, Puwei
    Liu, Ximeng
    Che, Yan
    NEURAL NETWORKS, 2025, 184
  • [33] Mitigating the Backdoor Attack by Federated Filters for Industrial IoT Applications
    Hou, Boyu
    Gao, Jiqiang
    Guo, Xiaojie
    Baker, Thar
    Zhang, Ying
    Wen, Yanlong
    Liu, Zheli
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (05) : 3562 - 3571
  • [34] Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning
    Chai, Ze
    Gao, Zhipeng
    Lin, Yijing
    Zhao, Chen
    Yu, Xinlei
    Xie, Zhiqiang
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 4614 - 4619
  • [35] Backdoor Federated Learning-Based mmWave Beam Selection
    Zhang, Zhengming
    Yang, Ruming
    Zhang, Xiangyu
    Li, Chunguo
    Huang, Yongming
    Yang, Luxi
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (10) : 6563 - 6578
  • [36] RobustFL: Robust Federated Learning Against Poisoning Attacks in Industrial IoT Systems
    Zhang, Jiale
    Ge, Chunpeng
    Hu, Feng
    Chen, Bing
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (09) : 6388 - 6397
  • [37] FLCert: Provably Secure Federated Learning Against Poisoning Attacks
    Cao, Xiaoyu
    Zhang, Zaixi
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3691 - 3705
  • [38] FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks
    Wang, Weiqi
    Zhang, Chenhan
    Liu, Shushu
    Tang, Mingjian
    Liu, An
    Yu, Shui
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4873 - 4878
  • [39] Towards Fairness-Aware Federated Learning
    Shi, Yuxin
    Yu, Han
    Leung, Cyril
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 11922 - 11938
  • [40] A Blockchain-Based Federated-Learning Framework for Defense against Backdoor Attacks
    Li, Lu
    Qin, Jiwei
    Luo, Jintao
    ELECTRONICS, 2023, 12 (11)