Defending Against Data and Model Backdoor Attacks in Federated Learning

Cited by: 1
Authors
Wang, Hao [1 ,2 ,3 ]
Mu, Xuejiao [1 ,3 ]
Wang, Dong [4 ]
Xu, Qiang [5 ]
Li, Kaiju [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Minist Culture & Tourism, Key Lab Tourism Multisource Data Percept & Decis, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Key Lab Cyberspace Big Data Intelligent Secur, Minist Educ, Chongqing 400065, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[4] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[5] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[6] Guizhou Univ Finance & Econ, Sch Informat, Guiyang 550025, Guizhou, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, Issue 24
Keywords
Data models; Training; Servers; Computational modeling; Filtering; Low-pass filters; Backdoor attack; Differential privacy; federated learning (FL); homomorphic encryption; spectrum filtering;
DOI
10.1109/JIOT.2024.3415628
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline Classification Code
0812
Abstract
Federated learning (FL) enables collaborative model training without transferring local data, which greatly improves training efficiency. However, FL is susceptible to data and model backdoor attacks. To address data backdoor attacks, in this article we propose a defense method named TSF. TSF transforms data from the time domain to the frequency domain and applies a low-pass filter to mitigate the impact of the high-frequency signals introduced by backdoor samples. Additionally, we apply homomorphic encryption to local updates to prevent the server from inferring users' data. We also introduce a defense method against model backdoor attacks named ciphertext-field similarity detection with differential privacy (CFSD-DP). CFSD-DP screens malicious updates using cosine similarity detection in the ciphertext domain and perturbs the global model with a differential privacy mechanism to mitigate the impact of model backdoor attacks. It can effectively detect malicious updates and safeguard the privacy of the global model. Experimental results show that the proposed TSF and CFSD-DP reduce backdoor accuracy by 73.8% while affecting main-task accuracy by only 3% compared with state-of-the-art schemes. Code is available at https://github.com/whwh456/TSF.
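The three mechanisms named in the abstract can be sketched roughly as follows. This is an illustrative plaintext sketch, not the paper's implementation: the function names, cutoff radius, similarity threshold, and noise scale are all assumptions, and the actual CFSD-DP performs its similarity check on homomorphically encrypted updates rather than on plaintext vectors as shown here.

```python
import numpy as np

def low_pass_filter(image, cutoff_ratio=0.25):
    """Frequency-domain low-pass filter (TSF-style idea): transform an
    image to the frequency domain, zero out components outside a cutoff
    radius, and transform back, suppressing high-frequency triggers."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w)
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

def screen_updates(updates, reference, threshold=0.5):
    """Cosine-similarity screening (CFSD-style idea, in plaintext):
    keep only client updates sufficiently aligned with a reference."""
    def cos_sim(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return [u for u in updates if cos_sim(u, reference) >= threshold]

def dp_perturb(model, sigma=0.01, rng=None):
    """DP-style perturbation: add Gaussian noise to the aggregated
    global model parameters before release."""
    rng = rng or np.random.default_rng(0)
    return model + rng.normal(0.0, sigma, size=model.shape)
```

For example, an update pointing opposite to the reference direction is discarded by `screen_updates`, while the filtered image returned by `low_pass_filter` keeps the same shape but loses its high-frequency content.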
Pages: 39276-39294
Page count: 19
Related Papers
50 items in total
  • [41] Facilitating Early-Stage Backdoor Attacks in Federated Learning With Whole Population Distribution Inference
    Liu, Tian
    Hu, Xueyang
    Shu, Tao
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (12) : 10385 - 10399
  • [42] Backdoor Attack to Giant Model in Fragment-Sharing Federated Learning
    Qi, Senmao
    Ma, Hao
    Zou, Yifei
    Yuan, Yuan
    Xie, Zhenzhen
    Li, Peng
    Cheng, Xiuzhen
    BIG DATA MINING AND ANALYTICS, 2024, 7 (04) : 1084 - 1097
  • [43] Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
    Mi, Yuxi
    Sun, Yiheng
    Guan, Jihong
    Zhou, Shuigeng
    WEB AND BIG DATA, PT III, APWEB-WAIM 2023, 2024, 14333 : 111 - 126
  • [44] Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
    Goldblum, Micah
    Tsipras, Dimitris
    Xie, Chulin
    Chen, Xinyun
    Schwarzschild, Avi
    Song, Dawn
    Madry, Aleksander
    Li, Bo
    Goldstein, Tom
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (02) : 1563 - 1580
  • [45] Defending against Poisoning Attacks in Federated Learning from a Spatial-temporal Perspective
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    2023 42ND INTERNATIONAL SYMPOSIUM ON RELIABLE DISTRIBUTED SYSTEMS, SRDS 2023, 2023, : 25 - 34
  • [46] Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey
    Li, Yudong
    Zhang, Shigeng
    Wang, Weiping
    Song, Hong
    IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY, 2023, 4 : 134 - 146
  • [47] A Verifiable Privacy-Preserving Federated Learning Framework Against Collusion Attacks
    Chen, Yange
    He, Suyu
    Wang, Baocang
    Feng, Zhanshen
    Zhu, Guanghui
    Tian, Zhihong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (05) : 3918 - 3934
  • [48] SecFFT: Safeguarding Federated Fine-Tuning for Large Vision Language Models Against Covert Backdoor Attacks in IoRT Networks
    Zhou, Zan
    Xu, Changqiao
    Wang, Bo
    Li, Tengfei
    Huang, Sizhe
    Yang, Shujie
    Yao, Su
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (09) : 11383 - 11396
  • [49] Backdoor attacks against distributed swarm learning
    Chen, Kongyang
    Zhang, Huaiyuan
    Feng, Xiangyu
    Zhang, Xiaoting
    Mi, Bing
    Jin, Zhiping
    ISA TRANSACTIONS, 2023, 141 : 59 - 72
  • [50] Regulated Federated Learning Against the Effects of Heterogeneity and Client Attacks
    Hu, Fei
    Zhou, Wuneng
    Liao, Kaili
    Li, Hongliang
    Tong, Dongbing
    IEEE INTELLIGENT SYSTEMS, 2024, 39 (06) : 28 - 39