Defense against backdoor attack in federated learning

Cited by: 20
Authors
Lu, Shiwei [1 ]
Li, Ruihu [1 ]
Liu, Wenbin [2 ]
Chen, Xuan [1 ]
Affiliations
[1] Air Force Engn Univ, Fundamentals Dept, Xian 710077, Peoples R China
[2] Guangzhou Univ, Inst Adv Computat Sci & Technol, Guangzhou 510006, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Model replacement attack; Adaptive backdoor attack; Model similarity measurement; Backdoor neuron activation; Abnormal model detection;
DOI
10.1016/j.cose.2022.102819
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
As a new distributed machine learning framework, Federated Learning (FL) effectively addresses the problems of data silos and privacy protection in the field of artificial intelligence. However, because of its independent devices, heterogeneous data, and unbalanced data distribution, it is more vulnerable to adversarial attacks, especially backdoor attacks. In this paper, we investigate typical backdoor attacks in FL, including the model replacement attack and the adaptive backdoor attack. Based on the round at which the attack is initiated, we divide backdoor attacks into convergence-round attacks and early-round attacks. We then design two defenses: a scheme with model pre-aggregation and similarity measurement that detects and removes backdoor models under convergence-round attacks, and a scheme with backdoor neuron activation that removes the backdoor under early-round attacks. Experiments and performance analysis show that, compared with benchmark schemes, our similarity-measurement defense achieves the highest backdoor detection accuracy under the model replacement attack (25% increase) and the adaptive backdoor attack (67% increase) at the convergence round, and its detection performance is the most stable. Compared with participant-level differential privacy and adversarial training, our backdoor-neuron-activation defense can rapidly remove the malicious effects of a backdoor without reducing main-task accuracy under early-round attacks. Thus, our defense schemes greatly improve the robustness of FL. Our key code is publicly available at https://github.com/lsw3130104597/Backdoor_detection. (C) 2022 Elsevier Ltd. All rights reserved.
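Note: the abstract only sketches the similarity-measurement defense; the authors' actual implementation is in the linked repository. As a minimal illustrative sketch (not the released code), the core idea of comparing each client update against a pre-aggregated reference and discarding similarity outliers could look like the following. The function names, the choice of cosine similarity, and the z-score threshold rule are assumptions made for illustration only.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine similarity between two flattened update vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def detect_benign_updates(client_updates, z_thresh=2.0):
    """Return indices of client updates judged benign.

    client_updates: list of 1-D numpy arrays (flattened model deltas).
    The pre-aggregated mean update serves as the reference; updates whose
    cosine similarity to it is a strong outlier are treated as backdoored.
    """
    pre_agg = np.mean(client_updates, axis=0)          # model pre-aggregation step
    sims = np.array([cosine_similarity(u, pre_agg) for u in client_updates])
    mu, sigma = sims.mean(), sims.std() + 1e-12
    return [i for i, s in enumerate(sims) if abs(s - mu) / sigma <= z_thresh]

# Toy usage: nine similar benign updates plus one scaled, divergent update
# standing in for a model-replacement-style backdoor submission.
rng = np.random.default_rng(0)
updates = [rng.normal(size=100) + 1.0 for _ in range(9)]
updates.append(-20.0 * rng.normal(size=100))
print(detect_benign_updates(updates))   # the last index should be excluded
```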
Pages: 11
Related Papers
50 records in total
  • [1] Shadow backdoor attack: Multi-intensity backdoor attack against federated learning
    Ren, Qixian
    Zheng, Yu
    Yang, Chao
    Li, Yue
    Ma, Jianfeng
    COMPUTERS & SECURITY, 2024, 139
  • [2] Survey of Backdoor Attack and Defense Algorithms Based on Federated Learning
    Liu, Jialang
    Guo, Yanming
    Lao, Mingrui
    Yu, Tianyuan
    Wu, Yulun
    Feng, Yunhao
    Wu, Jiazhuang
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2024, 61 (10): : 2607 - 2626
  • [3] Backdoor Attack and Defense in Asynchronous Federated Learning for Multiple Unmanned Vehicles
    Wang, Kehao
    Zhang, Hao
    2024 3RD CONFERENCE ON FULLY ACTUATED SYSTEM THEORY AND APPLICATIONS, FASTA 2024, 2024, : 843 - 847
  • [4] Backdoor Attack Defense Method for Federated Learning Based on Model Watermarking
    Guo J.-J.
    Liu J.-Z.
    Ma Y.
    Liu Z.-Q.
    Xiong Y.-P.
    Miao K.
    Li J.-X.
    Ma J.-F.
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (03): : 662 - 676
  • [5] DAGUARD: distributed backdoor attack defense scheme under federated learning
    Yu S.
    Chen Z.
    Chen Z.
    Liu X.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (05): : 110 - 122
  • [6] GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning
    Gan, Xiaoyun
    Gan, Shanyu
    Su, Taizhi
    Liu, Peng
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKS AND INTERNET OF THINGS, CNIOT 2024, 2024, : 606 - 612
  • [7] BADFL: Backdoor Attack Defense in Federated Learning From Local Model Perspective
    Zhang, Haiyan
    Li, Xinghua
    Xu, Mengfan
    Liu, Ximeng
    Wu, Tong
    Weng, Jian
    Deng, Robert H.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 5661 - 5674
  • [8] Dual-domain based backdoor attack against federated learning
    Li, Guorui
    Chang, Runxing
    Wang, Ying
    Wang, Cong
    NEUROCOMPUTING, 2025, 623
  • [9] Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Liu, Jiqiang
    Zhang, Xiangliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 9020 - 9028
  • [10] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62