Defending Against Data and Model Backdoor Attacks in Federated Learning

Cited by: 1
Authors
Wang, Hao [1 ,2 ,3 ]
Mu, Xuejiao [1 ,3 ]
Wang, Dong [4 ]
Xu, Qiang [5 ]
Li, Kaiju [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Minist Culture & Tourism, Key Lab Tourism Multisource Data Percept & Decis, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Key Lab Cyberspace Big Data Intelligent Secur, Minist Educ, Chongqing 400065, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[4] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[5] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[6] Guizhou Univ Finance & Econ, Sch Informat, Guiyang 550025, Guizhou, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 24
Keywords
Data models; Training; Servers; Computational modeling; Filtering; Low-pass filters; Backdoor attack; Differential privacy; federated learning (FL); homomorphic encryption; spectrum filtering;
DOI
10.1109/JIOT.2024.3415628
CLC Number
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
Federated learning (FL) enables collaborative model training without transferring local data, which greatly improves training efficiency. However, FL is susceptible to data and model backdoor attacks. To address data backdoor attacks, in this article, we propose a defense method named TSF. TSF transforms data from the time domain to the frequency domain and then designs a low-pass filter to mitigate the impact of the high-frequency signals introduced by backdoor samples. Additionally, we apply homomorphic encryption to local updates to prevent the server from inferring users' data. We also introduce a defense method against model backdoor attacks named ciphertext-field similarity detection differential privacy (CFSD-DP). CFSD-DP screens malicious updates using cosine similarity detection in the ciphertext domain and perturbs the global model with a differential privacy mechanism to mitigate the impact of model backdoor attacks. It can effectively detect malicious updates and safeguard the privacy of the global model. Experimental results show that the proposed TSF and CFSD-DP reduce backdoor accuracy by 73.8% while affecting main-task accuracy by only 3% compared with state-of-the-art schemes. Code is available at https://github.com/whwh456/TSF.
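The abstract describes TSF as a frequency-domain defense: samples are moved from the time (spatial) domain to the frequency domain and passed through a low-pass filter so that the high-frequency energy introduced by backdoor trigger patches is attenuated. The following is a minimal NumPy sketch of that idea; the 2-D FFT, the circular cutoff mask, and the cutoff ratio are illustrative assumptions, not the authors' exact TSF design.

```python
import numpy as np

def lowpass_filter_image(img: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Suppress high-frequency components of a single-channel image.

    A backdoor trigger patch tends to inject localized high-frequency energy;
    zeroing coefficients outside a centered low-pass radius attenuates it.
    (Sketch only; the cutoff shape and ratio are assumptions.)
    """
    h, w = img.shape
    # Shift the zero-frequency component to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(img))

    # Circular low-pass mask around the spectrum center.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= cutoff_ratio * min(h, w)

    # Zero out high-frequency coefficients and transform back.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Example: filter one 32x32 channel (e.g., a CIFAR-10 image plane).
clean = lowpass_filter_image(np.random.rand(32, 32), cutoff_ratio=0.25)
```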
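CFSD-DP, as described above, screens client updates with cosine similarity and then perturbs the aggregated global model with differential-privacy noise. The sketch below shows a plaintext version of such a pipeline under stated assumptions: the similarity threshold, the Gaussian noise scale, and the helper names are illustrative, and the paper's homomorphic-encryption step (running the similarity test directly on ciphertexts) is omitted.

```python
import numpy as np

def screen_and_aggregate(updates: list[np.ndarray],
                         sim_threshold: float = 0.0,
                         noise_std: float = 0.01) -> np.ndarray:
    """Plaintext sketch of similarity screening plus DP perturbation.

    Keeps client updates whose cosine similarity to the mean update exceeds
    a threshold, averages the survivors, and adds Gaussian noise to the
    result. CFSD-DP performs the similarity check in the ciphertext domain;
    that encryption layer is not reproduced here.
    """
    stacked = np.stack(updates)
    reference = stacked.mean(axis=0)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    kept = [u for u in updates if cosine(u, reference) > sim_threshold]
    if not kept:            # fall back to all updates if filtering empties the set
        kept = updates
    aggregated = np.mean(kept, axis=0)

    # Gaussian perturbation of the aggregated update (noise scale is illustrative).
    return aggregated + np.random.normal(0.0, noise_std, size=aggregated.shape)

# Example: three benign updates and one dissimilar (potentially malicious) one.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 1, 100) for _ in range(3)]
malicious = [-5 * benign[0]]
global_update = screen_and_aggregate(benign + malicious)
```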
Pages: 39276-39294
Page count: 19
Related Papers
50 records in total
  • [21] BadCleaner: Defending Backdoor Attacks in Federated Learning via Attention-Based Multi-Teacher Distillation
    Zhang, Jiale
    Zhu, Chengcheng
    Ge, Chunpeng
    Ma, Chuan
    Zhao, Yanchao
    Sun, Xiaobing
    Chen, Bing
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4559 - 4573
  • [22] A Blockchain-Based Federated-Learning Framework for Defense against Backdoor Attacks
    Li, Lu
    Qin, Jiwei
    Luo, Jintao
    ELECTRONICS, 2023, 12 (11)
  • [23] A Differentially Private Federated Learning Model Against Poisoning Attacks in Edge Computing
    Zhou, Jun
    Wu, Nan
    Wang, Yisong
    Gu, Shouzhen
    Cao, Zhenfu
    Dong, Xiaolei
    Choo, Kim-Kwang Raymond
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 1941 - 1958
  • [24] OQFL: An Optimized Quantum-Based Federated Learning Framework for Defending Against Adversarial Attacks in Intelligent Transportation Systems
    Yamany, Waleed
    Moustafa, Nour
    Turnbull, Benjamin
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (01) : 893 - 903
  • [25] CapsuleBD: A Backdoor Attack Method Against Federated Learning Under Heterogeneous Models
    Liao, Yuying
    Zhao, Xuechen
    Zhou, Bin
    Huang, Yanyi
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 4071 - 4086
  • [26] SBPA: Sybil-Based Backdoor Poisoning Attacks for Distributed Big Data in AIoT-Based Federated Learning System
    Xiao, Xiong
    Tang, Zhuo
    Li, Chuanying
    Jiang, Bingting
    Li, Kenli
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 827 - 838
  • [27] Defending Batch-Level Label Inference and Replacement Attacks in Vertical Federated Learning
    Zou, Tianyuan
    Liu, Yang
    Kang, Yan
    Liu, Wenhan
    He, Yuanqin
    Yi, Zhihao
    Yang, Qiang
    Zhang, Ya-Qin
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 1016 - 1027
  • [28] Collusive Backdoor Attacks in Federated Learning Frameworks for IoT Systems
    Alharbi, Saier
    Guo, Yifan
    Yu, Wei
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (11) : 19694 - 19707
  • [29] PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems
    Zhang, Jiale
    Chen, Bing
    Cheng, Xiang
    Huynh Thi Thanh Binh
    Yu, Shui
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (05) : 3310 - 3322
  • [30] PerVK: A Robust Personalized Federated Framework to Defend Against Backdoor Attacks for IoT Applications
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    Liu, Danyang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 4930 - 4939