Towards Practical Backdoor Attacks on Federated Learning Systems

Cited by: 1
Authors
Shi, Chenghui [1 ]
Ji, Shouling [1 ]
Pan, Xudong [2 ]
Zhang, Xuhong [1 ]
Zhang, Mi [2 ]
Yang, Min [2 ]
Zhou, Jun [3 ]
Yin, Jianwei [1 ]
Wang, Ting [4 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
[2] Fudan Univ, Sch Comp Sci & Technol, Shanghai 200433, Peoples R China
[3] Ant Grp, Hangzhou 310000, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Neurons; Computational modeling; Training; Task analysis; Data models; Servers; Face recognition; Federated learning; backdoor attack; deep neural networks
DOI
10.1109/TDSC.2024.3376790
CLC Classification Number
TP3 [Computing Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Federated Learning (FL) is one of the most promising paradigms for privacy-preserving distributed learning. Without revealing its local private data to outsiders, a client in an FL system collaborates to build a global Deep Neural Network (DNN) by submitting its local model parameter update to a central server for iterative aggregation. With secure multi-party computation protocols, the submitted update of any client is by design also invisible to the server. Seemingly, this standard design is a win-win for client privacy and service-provider utility. Ironically, any attacker may also use a manipulated or impersonated client to submit almost arbitrary attack payloads under the umbrella of the FL protocol itself. In this work, we craft a practical backdoor attack on FL systems that proves to be simultaneously effective and stealthy across diverse FL use cases and leading commercial FL platforms in the real world. We first identify a small number of redundant neurons that tend to be rarely or only slightly updated in the model, and then inject the backdoor into these redundant neurons instead of the whole model. In this way, our backdoor attack achieves a high attack success rate with only a minor impact on the accuracy of the original task. As countermeasures, we further consider several common technical choices, including robust aggregation mechanisms, differential privacy mechanisms, and network pruning. However, none of these defenses shows the desired capability against our backdoor attack. Our results strongly highlight the vulnerability of existing FL systems to backdoor attacks and the urgent need to develop more effective defense mechanisms.
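To make the high-level idea in the abstract concrete, the following is a minimal PyTorch-style sketch of the general technique it describes, not the authors' implementation: score how much each neuron's weights change between two model snapshots, mark the least-updated ("redundant") neurons, and confine a poisoned local update to them. All function names, the 5% selection fraction, and the learning rate are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

def neuron_update_magnitudes(old_state, new_state):
    """Mean absolute weight change per output neuron, layer by layer."""
    scores = {}
    for name, old_w in old_state.items():
        if old_w.dim() < 2:                          # skip biases / norm params
            continue
        delta = (new_state[name] - old_w).abs()
        scores[name] = delta.flatten(1).mean(dim=1)  # one score per output neuron
    return scores

def redundant_neuron_masks(scores, fraction=0.05):
    """Boolean masks marking the `fraction` least-updated neurons per layer."""
    masks = {}
    for name, s in scores.items():
        k = max(1, int(fraction * s.numel()))
        idx = torch.topk(s, k, largest=False).indices  # smallest update magnitudes
        mask = torch.zeros_like(s, dtype=torch.bool)
        mask[idx] = True
        masks[name] = mask
    return masks

def constrained_backdoor_step(model, masks, poisoned_x, target_y, lr=0.01):
    """One local step that only modifies weights of the masked (redundant) neurons."""
    loss = nn.functional.cross_entropy(model(poisoned_x), target_y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name not in masks or param.grad is None:
                continue
            keep = masks[name].to(param.device)
            param.grad[~keep] = 0.0                  # freeze all non-redundant neurons
            param -= lr * param.grad
```

Under these assumptions, a malicious client would compare two consecutive global models it received to compute the scores, derive the masks, and then run the constrained step on trigger-stamped samples before submitting its update, so the backdoor is carried by neurons whose changes matter little to the benign task.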
Pages: 5431-5447
Number of pages: 17