Defending Against Data and Model Backdoor Attacks in Federated Learning

Cited by: 1
Authors
Wang, Hao [1 ,2 ,3 ]
Mu, Xuejiao [1 ,3 ]
Wang, Dong [4 ]
Xu, Qiang [5 ]
Li, Kaiju [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Minist Culture & Tourism, Key Lab Tourism Multisource Data Percept & Decis, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Key Lab Cyberspace Big Data Intelligent Secur, Minist Educ, Chongqing 400065, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[4] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[5] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[6] Guizhou Univ Finance & Econ, Sch Informat, Guiyang 550025, Guizhou, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, Issue 24
Keywords
Data models; Training; Servers; Computational modeling; Filtering; Low-pass filters; Backdoor attack; Differential privacy; federated learning (FL); homomorphic encryption; spectrum filtering;
DOI
10.1109/JIOT.2024.3415628
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) enables collaborative model training without transferring local data, which greatly improves training efficiency. However, FL is susceptible to data and model backdoor attacks. To address data backdoor attacks, in this article we propose a defense method named TSF. TSF transforms data from the time domain to the frequency domain and applies a low-pass filter to mitigate the impact of the high-frequency signals introduced by backdoor samples. Additionally, we apply homomorphic encryption to local updates to prevent the server from inferring users' data. We also introduce a defense method against model backdoor attacks named ciphertext field similarity detection with differential privacy (CFSD-DP). CFSD-DP screens malicious updates using cosine similarity detection in the ciphertext domain and perturbs the global model with a differential privacy mechanism to mitigate the impact of model backdoor attacks, effectively detecting malicious updates while safeguarding the privacy of the global model. Experimental results show that the proposed TSF and CFSD-DP reduce backdoor accuracy by 73.8% while affecting main-task accuracy by only 3% compared with state-of-the-art schemes. Code is available at https://github.com/whwh456/TSF.
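The abstract sketches two mechanisms: TSF filters high-frequency components out of training inputs, and CFSD-DP screens client updates by cosine similarity before perturbing the aggregate with differential-privacy noise. A minimal plaintext sketch of both ideas in NumPy follows; it is an illustration, not the paper's implementation. The function names, circular low-pass mask, cutoff ratio, similarity threshold, and noise scale are all assumptions, and the homomorphic-encryption layer that lets CFSD-DP run its similarity test in the ciphertext domain is omitted entirely (see https://github.com/whwh456/TSF for the authors' code).

```python
import numpy as np

def low_pass_filter(image, cutoff_ratio=0.25):
    """TSF-style idea: suppress the high-frequency components that
    backdoor triggers tend to introduce into an input sample.
    `cutoff_ratio` is an assumed hyperparameter, not from the paper."""
    f = np.fft.fftshift(np.fft.fft2(image))               # center the spectrum
    h, w = image.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w)
    mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2   # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def screen_and_aggregate(updates, sim_threshold=0.0, noise_std=1e-3, rng=None):
    """CFSD-DP-style idea, in plaintext: keep only client updates whose
    cosine similarity to the mean update exceeds a threshold, then
    perturb the aggregate with Gaussian noise as a simple DP mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    stacked = np.stack([np.ravel(u) for u in updates])    # (n_clients, dim)
    reference = stacked.mean(axis=0)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    kept = [u for u in stacked if cos(u, reference) > sim_threshold]
    if not kept:                    # fall back if every update was rejected
        kept = [reference]
    aggregate = np.mean(kept, axis=0)
    return aggregate + rng.normal(0.0, noise_std, size=aggregate.shape)

# Example: filter one 32x32 grayscale sample, then aggregate three client updates.
cleaned = low_pass_filter(np.random.rand(32, 32))
global_update = screen_and_aggregate([np.random.randn(100) for _ in range(3)])
```

A filter of this kind trades trigger suppression against loss of legitimate high-frequency detail, which is consistent with the small main-task accuracy cost (about 3%) the abstract reports.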
Pages: 39276-39294
Page count: 19
Related Papers
50 records in total
  • [31] Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning
    He, Ying
    Shen, Zhili
    Hua, Jingyu
    Dong, Qixuan
    Niu, Jiacheng
    Tong, Wei
    Huang, Xu
    Li, Chen
    Zhong, Sheng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 748 - 763
  • [32] FLCert: Provably Secure Federated Learning Against Poisoning Attacks
    Cao, Xiaoyu
    Zhang, Zaixi
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3691 - 3705
  • [33] Artemis: Defending Against Backdoor Attacks via Distribution Shift
    Xue, Meng
    Wang, Zhixian
    Zhang, Qian
    Gong, Xueluan
    Liu, Zhihang
    Chen, Yanjiao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1781 - 1795
  • [34] Distributed Backdoor Attacks in Federated Learning Generated by Dynamic Triggers
    Wang, Jian
    Shen, Hong
    Liu, Xuehua
    Zhou, Hua
    Li, Yuli
    INFORMATION SECURITY THEORY AND PRACTICE, WISTP 2024, 2024, 14625 : 178 - 193
  • [35] SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks
    Zhang, Wenbin
    Li, Youpeng
    An, Lingling
    Wan, Bo
    Wang, Xuyu
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2024, 8 (04):
  • [36] MITDBA: Mitigating Dynamic Backdoor Attacks in Federated Learning for IoT Applications
    Wang, Yongkang
    Zhai, Di-Hua
    Han, Dongyu
    Guan, Yuyin
    Xia, Yuanqing
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (06) : 10115 - 10132
  • [37] A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks
    Yazdinejad, Abbas
    Dehghantanha, Ali
    Karimipour, Hadis
    Srivastava, Gautam
    Parizi, Reza M.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 6693 - 6708
  • [38] Defending Federated Learning from Backdoor Attacks: Anomaly-Aware FedAVG with Layer-Based Aggregation
    Manzoor, Habib Ullah
    Khan, Ahsan Raza
    Sher, Tahir
    Ahmad, Wasim
    Zoha, Ahmed
    2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023,
  • [39] One-Shot Backdoor Removal for Federated Learning
    Pan, Zijie
    Ying, Zuobin
    Wang, Yajie
    Zhang, Chuan
    Li, Chunhai
    Zhu, Liehuang
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (23) : 37718 - 37730
  • [40] Evaluating Security and Robustness for Split Federated Learning Against Poisoning Attacks
    Wu, Xiaodong
    Yuan, Henry
    Li, Xiangman
    Ni, Jianbing
    Lu, Rongxing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 175 - 190