Towards Practical Backdoor Attacks on Federated Learning Systems

Cited by: 1
Authors
Shi, Chenghui [1 ]
Ji, Shouling [1 ]
Pan, Xudong [2 ]
Zhang, Xuhong [1 ]
Zhang, Mi [2 ]
Yang, Min [2 ]
Zhou, Jun [3 ]
Yin, Jianwei [1 ]
Wang, Ting [4 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Sch Comp Sci & Technol, Shanghai 200433, Peoples R China
[3] Ant Grp, Hangzhou 310000, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neurons; Computational modeling; Training; Task analysis; Data models; Servers; Face recognition; Federated learning; backdoor attack; deep neural networks;
DOI
10.1109/TDSC.2024.3376790
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated Learning (FL) is nowadays one of the most promising paradigms for privacy-preserving distributed learning. Without revealing its local private data to outsiders, a client in an FL system collaborates to build a global Deep Neural Network (DNN) by submitting its local model parameter update to a central server for iterative aggregation. With secure multi-party computation protocols, the submitted update of any client is also by design invisible to the server. Seemingly, this standard design is a win-win for client privacy and service provider utility. Ironically, any attacker may also use manipulated or impersonated clients to submit almost arbitrary attack payloads under the umbrella of the FL protocol itself. In this work, we craft a practical backdoor attack on FL systems that is shown to be simultaneously effective and stealthy on diverse use cases of FL systems and leading commercial FL platforms in the real world. Specifically, we first identify a small number of redundant neurons which tend to be rarely or only slightly updated during training, and then inject the backdoor into these redundant neurons instead of the whole model. In this way, our backdoor attack can achieve a high attack success rate with a minor impact on the accuracy of the original task. As countermeasures, we further consider several common technical choices, including robust aggregation mechanisms, differential privacy mechanisms, and network pruning. However, none of these defenses shows desirable defense capability against our backdoor attack. Our results strongly highlight the vulnerability of existing FL systems to backdoor attacks and the urgent need to develop more effective defense mechanisms.
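The core idea sketched in the abstract can be illustrated in a few lines. The snippet below is a minimal, hypothetical sketch (not the authors' actual method): it assumes the model is a flattened parameter vector, uses the mean absolute update magnitude across past rounds as the "redundancy" criterion, and confines a malicious payload to the selected coordinates while submitting the benign update everywhere else. The function names and the 5% selection fraction are illustrative assumptions.

```python
import numpy as np

def redundant_mask(update_history, frac=0.05):
    """Flag the `frac` of parameters with the smallest mean absolute
    update across past rounds as 'redundant' (rarely/slightly updated)."""
    mean_mag = np.mean(np.abs(update_history), axis=0)  # per-parameter magnitude
    k = max(1, int(frac * mean_mag.size))
    idx = np.argsort(mean_mag)[:k]                      # least-updated coordinates
    mask = np.zeros(mean_mag.size, dtype=bool)
    mask[idx] = True
    return mask

def masked_backdoor_update(benign_update, backdoor_update, mask):
    """Submit the benign update on most coordinates; only the redundant
    coordinates carry the backdoor payload, limiting impact on the main task."""
    out = benign_update.copy()
    out[mask] = backdoor_update[mask]
    return out

# Toy setup: 10 observed rounds, 100 parameters; the first 5 barely move.
rng = np.random.default_rng(0)
history = rng.normal(size=(10, 100))
history[:, :5] *= 1e-3
mask = redundant_mask(history, frac=0.05)   # selects the 5 near-static parameters
```

Restricting the payload to coordinates the aggregation rarely touches is what lets the attack keep the global model's clean accuracy largely intact, which in turn is why magnitude-based robust aggregation struggles to flag it.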
Pages: 5431-5447
Page count: 17
Related Papers
50 in total
  • [41] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [42] ICT: Invisible Computable Trigger Backdoor Attacks in Transfer Learning
    Chen, Xiang
    Liu, Bo
    Zhao, Shaofeng
    Liu, Ming
    Xu, Hui
    Li, Zhanbo
    Zheng, Zhigao
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (04) : 6747 - 6758
  • [43] FLSAD: Defending Backdoor Attacks in Federated Learning via Self-Attention Distillation
    Chen, Lucheng
    Liu, Xiaoshuang
    Wang, Ailing
    Zhai, Weiwei
    Cheng, Xiang
    SYMMETRY-BASEL, 2024, 16 (11)
  • [44] FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks
    Chen, Jian
    Lin, Zehui
    Lin, Wanyu
    Shi, Wenlong
    Yin, Xiaoyan
    Wang, Di
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 1665 - 1678
  • [45] Backdoor Attack to Giant Model in Fragment-Sharing Federated Learning
    Qi, Senmao
    Ma, Hao
    Zou, Yifei
    Yuan, Yuan
    Xie, Zhenzhen
    Li, Peng
    Cheng, Xiuzhen
    BIG DATA MINING AND ANALYTICS, 2024, 7 (04): 1084 - 1097
  • [46] Practical Attribute Reconstruction Attack Against Federated Learning
    Chen, Chen
    Lyu, Lingjuan
    Yu, Han
    Chen, Gang
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 851 - 863
  • [47] Never Too Late: Tracing and Mitigating Backdoor Attacks in Federated Learning
    Zeng, Hui
    Zhou, Tongqing
    Wu, Xinyi
    Cai, Zhiping
    2022 41ST INTERNATIONAL SYMPOSIUM ON RELIABLE DISTRIBUTED SYSTEMS (SRDS 2022), 2022, : 69 - 81
  • [48] BadCleaner: Defending Backdoor Attacks in Federated Learning via Attention-Based Multi-Teacher Distillation
    Zhang, Jiale
    Zhu, Chengcheng
    Ge, Chunpeng
    Ma, Chuan
    Zhao, Yanchao
    Sun, Xiaobing
    Chen, Bing
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4559 - 4573
  • [49] SecFFT: Safeguarding Federated Fine-Tuning for Large Vision Language Models Against Covert Backdoor Attacks in IoRT Networks
    Zhou, Zan
    Xu, Changqiao
    Wang, Bo
    Li, Tengfei
    Huang, Sizhe
    Yang, Shujie
    Yao, Su
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (09): 11383 - 11396
  • [50] Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
    Hu, Hongsheng
    Zhang, Xuyun
    Salcic, Zoran
    Sun, Lichao
    Choo, Kim-Kwang Raymond
    Dobbie, Gillian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 3012 - 3029