FLPurifier: Backdoor Defense in Federated Learning via Decoupled Contrastive Training

Cited: 5
Authors
Zhang, Jiale [1 ]
Zhu, Chengcheng [1 ]
Sun, Xiaobing [1 ]
Ge, Chunpeng [2 ]
Chen, Bing [3 ]
Susilo, Willy [4 ]
Yu, Shui [5 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225127, Peoples R China
[2] Shandong Univ, Joint SDU NTU Ctr Artificial Intelligence Res C FA, Software Sch, Jinan 250000, Peoples R China
[3] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[4] Univ Wollongong, Inst Cybersecur & Cryptol, Sch Comp & Informat Technol, Wollongong, NSW 2522, Australia
[5] Univ Technol Sydney, Sch Comp Sci, Ultimo, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; backdoor attacks; decoupled contrastive training; adaptive classifier aggregation; POISONING ATTACKS;
DOI
10.1109/TIFS.2024.3384846
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Recent studies have demonstrated that backdoor attacks pose a significant security threat to federated learning. Existing defense methods mainly focus on detecting or eliminating backdoor patterns after the model has already been backdoored. However, these methods either degrade model performance or rely heavily on impractical assumptions, such as the availability of labeled clean data, and therefore exhibit limited effectiveness in federated learning. To this end, we propose FLPurifier, a novel backdoor defense method for federated learning that can effectively purify possible backdoor attributes before federated aggregation. Specifically, FLPurifier splits a complete model into a feature extractor and a classifier, where the extractor is trained in a decoupled contrastive manner to break the strong correlation between trigger features and the target label. Compared with existing backdoor mitigation methods, FLPurifier does not rely on impractical assumptions, since it purifies backdoor effects during the training process rather than in an already trained model. Moreover, to reduce the negative impact of backdoored classifiers and improve global model accuracy, we further design an adaptive classifier aggregation strategy that dynamically adjusts the weight coefficients. Extensive experimental evaluations on six benchmark datasets demonstrate that FLPurifier is effective against known backdoor attacks in federated learning with negligible performance degradation and outperforms state-of-the-art defense methods.
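The adaptive classifier aggregation described in the abstract can be illustrated with a minimal sketch. The exact weighting rule used by FLPurifier is not given in this record, so the sketch below is a hypothetical instantiation: each client's classifier parameters are weighted by their cosine similarity to the mean classifier and softmax-normalized, so that outlier (potentially backdoored) classifiers receive smaller aggregation coefficients. The function name and the similarity-based rule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def adaptive_classifier_aggregation(classifiers, temperature=1.0):
    """Hypothetical sketch of adaptive classifier aggregation.

    classifiers: list of per-client classifier parameter arrays (same shape).
    Returns the aggregated classifier and the per-client weight coefficients.
    """
    # Flatten each client's classifier into a vector: shape (n_clients, d).
    W = np.stack([np.asarray(c).ravel() for c in classifiers])
    mean = W.mean(axis=0)

    # Cosine similarity of each client's classifier to the mean classifier.
    sims = W @ mean / (np.linalg.norm(W, axis=1) * np.linalg.norm(mean) + 1e-12)

    # Softmax over similarities: dissimilar (suspicious) clients get low weight.
    coeffs = np.exp(sims / temperature)
    coeffs /= coeffs.sum()

    # Weighted average of the classifier parameters.
    agg = (coeffs[:, None] * W).sum(axis=0)
    return agg.reshape(np.asarray(classifiers[0]).shape), coeffs
```

Under this assumed rule, a backdoored classifier that points away from the benign consensus is down-weighted rather than removed, which keeps aggregation smooth while limiting its influence.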
Pages: 4752-4766
Page count: 15
Related Papers
50 records in total
  • [1] Backdoor defense method in federated learning based on contrastive training
    Zhang J.
    Zhu C.
    Cheng X.
    Sun X.
    Chen B.
    Tongxin Xuebao/Journal on Communications, 45 (03): 182-196
  • [2] GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning
    Gan, Xiaoyun
    Gan, Shanyu
    Su, Taizhi
    Liu, Peng
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKS AND INTERNET OF THINGS, CNIOT 2024, 2024, : 606 - 612
  • [3] Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training
    Huang, Tiansheng
    Hu, Sihao
    Chow, Ka-Ho
    Ilhan, Fatih
    Tekin, Selim Furkan
    Liu, Ling
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [4] VFLIP: A Backdoor Defense for Vertical Federated Learning via Identification and Purification
    Cho, Yungi
    Han, Woorim
    Yu, Miseon
    Lee, Younghan
    Bae, Ho
    Paek, Yunheung
    COMPUTER SECURITY-ESORICS 2024, PT IV, 2024, 14985 : 291 - 312
  • [5] BayBFed: Bayesian Backdoor Defense for Federated Learning
    Kumari, Kavita
    Rieger, Phillip
    Fereidooni, Hossein
    Jadliwala, Murtuza
    Sadeghi, Ahmad-Reza
    2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP, 2023, : 737 - 754
  • [6] Defense against backdoor attack in federated learning
    Lu, Shiwei
    Li, Ruihu
    Liu, Wenbin
    Chen, Xuan
    COMPUTERS & SECURITY, 2022, 121
  • [7] Federated Learning Backdoor Defense Based on Watermark Integrity
    Hou, Yinjian
    Zhao, Yancheng
    Yao, Kaiqi
    2024 10TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA 2024, 2024, : 288 - 294
  • [8] Survey of Backdoor Attack and Defense Algorithms Based on Federated Learning
    Liu, Jialang
    Guo, Yanming
    Lao, Mingrui
    Yu, Tianyuan
    Wu, Yulun
    Feng, Yunhao
    Wu, Jiazhuang
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2024, 61 (10): 2607-2626
  • [9] Contrastive Neuron Pruning for Backdoor Defense
    Feng, Yu
    Ma, Benteng
    Liu, Dongnan
    Zhang, Yanning
    Cai, Weidong
    Xia, Yong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2025, 34 : 1234 - 1245
  • [10] Backdoor Defense via Deconfounded Representation Learning
    Zhang, Zaixi
    Liu, Qi
    Wang, Zhicai
    Lu, Zepu
    Hu, Qingyong
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 12228 - 12238