Defense against backdoor attack in federated learning

Cited by: 20
Authors
Lu, Shiwei [1 ]
Li, Ruihu [1 ]
Liu, Wenbin [2 ]
Chen, Xuan [1 ]
Affiliations
[1] Air Force Engn Univ, Fundamentals Dept, Xian 710077, Peoples R China
[2] Guangzhou Univ, Inst Adv Computat Sci & Technol, Guangzhou 510006, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Model replacement attack; Adaptive backdoor attack; Model similarity measurement; Backdoor neuron activation; Abnormal model detection;
DOI
10.1016/j.cose.2022.102819
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Classification Code
0812;
Abstract
As a new distributed machine learning framework, Federated Learning (FL) effectively addresses the problems of data silos and privacy protection in the field of artificial intelligence. However, because of its independent devices, heterogeneous data, and unbalanced data distribution, FL is more vulnerable to adversarial attacks, especially backdoor attacks. In this paper, we investigate typical backdoor attacks in FL, including the model replacement attack and the adaptive backdoor attack. Based on the round at which the attack is initiated, we divide backdoor attacks into convergence-round attacks and early-round attacks. We then design two defenses: a scheme based on model pre-aggregation and similarity measurement that detects and removes backdoored models under convergence-round attacks, and a scheme based on backdoor neuron activation that removes the backdoor under early-round attacks. Experiments and performance analysis show that, compared with benchmark schemes, our similarity-measurement defense achieves the highest backdoor detection accuracy under the model replacement attack (a 25% increase) and the adaptive backdoor attack (a 67% increase) at the convergence round, and its detection performance is the most stable. Compared with defenses based on participant-level differential privacy and adversarial training, our backdoor-neuron-activation defense rapidly removes the malicious effects of the backdoor without reducing main-task accuracy under early-round attacks. Thus, our defense schemes can greatly improve the robustness of FL. We make our key code publicly available on GitHub at https://github.com/lsw3130104597/Backdoor_detection. (C) 2022 Elsevier Ltd. All rights reserved.
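To make the detection idea in the abstract concrete, below is a minimal sketch, not taken from the authors' released code (see the repository link above): each client update is flattened into a vector, compared against a pre-aggregated reference, and flagged as a candidate backdoor model when its cosine similarity to the reference is a statistical outlier. The coordinate-wise median used as the reference, the z-score threshold, and names such as detect_backdoor_updates are assumptions of this sketch, not the paper's exact procedure.

# Minimal sketch (assumes NumPy): similarity-based detection of suspicious
# client updates in one federated learning round. Illustrative only.
import numpy as np

def flatten(update):
    # Concatenate every parameter tensor of one client update into a vector.
    return np.concatenate([np.asarray(p).ravel() for p in update])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def detect_backdoor_updates(client_updates, z_threshold=2.0):
    # Pre-aggregation: a coordinate-wise median serves as a robust reference
    # here (an assumption of this sketch, not necessarily the paper's choice).
    vectors = [flatten(u) for u in client_updates]
    reference = np.median(vectors, axis=0)
    sims = np.array([cosine_similarity(v, reference) for v in vectors])
    mu, sigma = sims.mean(), sims.std() + 1e-12
    # Flag clients whose update points away from the consensus direction.
    return [i for i, s in enumerate(sims) if (mu - s) / sigma > z_threshold]

# Toy round: nine benign clients share a common update direction; one
# attacker submits a scaled, divergent update (model-replacement style).
rng = np.random.default_rng(0)
base = rng.normal(0.0, 1.0, 100)
benign = [[base + rng.normal(0.0, 0.1, 100)] for _ in range(9)]
malicious = [[-20.0 * base + rng.normal(0.0, 0.1, 100)]]
print(detect_backdoor_updates(benign + malicious))  # should flag index 9, the attacker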
Pages: 11
Related Papers
50 records in total
  • [11] BayBFed: Bayesian Backdoor Defense for Federated Learning
    Kumari, Kavita
    Rieger, Phillip
    Fereidooni, Hossein
    Jadliwala, Murtuza
    Sadeghi, Ahmad-Reza
    2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP, 2023, : 737 - 754
  • [12] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [13] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)
  • [14] Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
    Abad, Gorka
    Paguada, Servio
    Ersoy, Oguzhan
    Picek, Stjepan
    Ramirez-Duran, Victor Julio
    Urbieta, Aitor
    2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 377 - 391
  • [15] FLAIR: Defense against Model Poisoning Attack in Federated Learning
    Sharma, Atul
    Chen, Wei
    Zhao, Joshua
    Qiu, Qiang
    Bagchi, Saurabh
    Chaterji, Somali
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 553 - +
  • [16] LoMar: A Local Defense Against Poisoning Attack on Federated Learning
    Li, Xingyu
    Qu, Zhe
    Zhao, Shangqing
    Tang, Bo
    Lu, Zhuo
    Liu, Yao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 437 - 450
  • [17] Backdoor Attack and Defense on Deep Learning: A Survey
    Bai, Yang
    Xing, Gaojie
    Wu, Hongyan
    Rao, Zhihong
    Ma, Chuan
    Wang, Shiping
    Liu, Xiaolei
    Zhou, Yimin
    Tang, Jiajia
    Huang, Kaijun
    Kang, Jiale
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2025, 12 (01): 404 - 434
  • [18] Stealthy Backdoor Attack Against Federated Learning Through Frequency Domain by Backdoor Neuron Constraint and Model Camouflage
    Qiao, Yanqi
    Liu, Dazhuang
    Wang, Rui
    Liang, Kaitai
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2024, 14 (04) : 661 - 672
  • [19] Distributed Swift and Stealthy Backdoor Attack on Federated Learning
    Sundar, Agnideven Palanisamy
    Li, Feng
    Zou, Xukai
    Gao, Tianchong
    2022 IEEE INTERNATIONAL CONFERENCE ON NETWORKING, ARCHITECTURE AND STORAGE (NAS), 2022, : 193 - 200
  • [20] Federated learning backdoor attack detection with persistence diagram
    Ma, Zihan
    Gao, Tianchong
    COMPUTERS & SECURITY, 2024, 136