Towards defending adaptive backdoor attacks in Federated Learning

Cited by: 0
Authors
Yang, Han [1 ]
Gu, Dongbing [1 ]
He, Jianhua [1 ]
Affiliations
[1] Univ Essex, Dept Comp Sci & Elect Engn, Colchester, Essex, England
Source
ICC 2023 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023
Keywords
Deep Learning; Federated Learning; Backdoor Attack; Model Poisoning
DOI
10.1109/ICC45041.2023.10279267
Chinese Library Classification
TN [Electronic Technology, Communication Technology]
Discipline Code
0809
Abstract
Federated learning (FL) is an efficient, scalable, and privacy-preserving technology in which clients collaboratively train machine learning or deep learning models. However, malicious clients can send poisoned model updates to the central server without being identified, which makes FL vulnerable to backdoor attacks. In this work, we propose a novel defence approach, FLSec, to mitigate backdoor attacks caused by adversarial local model updates. FLSec relies on an original measurement, GradScore, computed from the norm of the loss gradient at the final layer of each local model. We show through analysis and experiments that GradScore is efficient and robust in identifying malicious model updates. Our extensive evaluation also demonstrates that FLSec is highly effective in mitigating three state-of-the-art backdoor attacks on the well-known MNIST, LOAN, and CIFAR-10 datasets. With the proposed defence, accuracy on the benign dataset is nearly unchanged, while accuracy on the backdoor dataset is reduced to 0%. In addition, our experiments show that FLSec significantly outperforms existing backdoor defences against multi-round backdoor attacks.
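The abstract states only that GradScore is derived from the loss gradient norm of a local model's final layer; the exact definition and the rule for turning scores into an accept/reject decision are not given in this record. Below is a minimal sketch of that idea, assuming GradScore is the L2 norm of the last-layer loss gradient evaluated on a small server-held validation batch and that outlying scores are filtered with a simple z-score rule. The function names, the validation batch, and the threshold are illustrative assumptions, not FLSec's actual procedure.

import torch
import torch.nn.functional as F

def grad_score(model, final_layer, x, y):
    # Illustrative "GradScore": L2 norm of the loss gradient taken only
    # w.r.t. the parameters of the model's final layer.
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(final_layer.parameters()))
    return torch.cat([g.flatten() for g in grads]).norm().item()

def filter_clients(client_models, get_final_layer, x_val, y_val, z_thresh=2.0):
    # Score every submitted local model, then keep only clients whose score
    # is not an outlier under a simple z-score rule (hypothetical filtering).
    scores = torch.tensor([grad_score(m, get_final_layer(m), x_val, y_val)
                           for m in client_models])
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return [i for i, zi in enumerate(z) if abs(zi.item()) < z_thresh]

Under these assumptions, the server would call filter_clients with, say, get_final_layer = lambda m: m.fc and a small held-out batch, and pass only the surviving clients' updates to the usual aggregation step (e.g., FedAvg).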
Pages: 5078 - 5084
Page count: 7