Federated Learning Backdoor Defense Based on Watermark Integrity

Cited by: 0
Authors
Hou, Yinjian [1 ]
Zhao, Yancheng [1 ]
Yao, Kaiqi [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha, Hunan, Peoples R China
Source
2024 10TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA 2024 | 2024
Keywords
federated learning; poisoning attacks; high poisoned proportion; watermark integrity
DOI
10.1109/BIGDIA63733.2024.10808344
Abstract
As federated learning sees widespread application, security issues, especially the threat of backdoor attacks, have become increasingly prominent. Existing defenses against backdoor poisoning in federated learning lose effectiveness when the proportion of poisoned data injected by malicious participants exceeds 50%. To address this issue, we propose a backdoor defense method for federated learning based on model watermarking. The aggregation server generates an initial global model carrying a watermark and distributes it to local participants; by verifying the integrity of the watermark in the returned models, the server detects malicious participants and thereby enhances the robustness of the global model. Experiments on the CIFAR-10 and Tiny-ImageNet datasets demonstrate that our method effectively detects and defends against backdoor poisoning attacks under a high proportion of poisoned data, as well as across different triggers, attack methods, and scales.
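The abstract describes the defense only at a high level: the server embeds a watermark into the initial global model, then excludes any returned model whose watermark no longer verifies. The paper's actual watermarking scheme is not given in this record, so the following is a minimal hypothetical sketch, not the authors' implementation: the model is reduced to a linear classifier, the watermark is a secret trigger set the initial model is fit to classify exactly, and the server filters returned models by watermark accuracy before averaging. All names (`watermark_accuracy`, `THRESH`, the linear model itself) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_WM = 32, 20

# Hypothetical watermark key set: secret random inputs with fixed labels,
# known only to the aggregation server.
wm_x = rng.normal(size=(N_WM, DIM))
wm_y = rng.choice([-1.0, 1.0], size=N_WM)

def watermark_accuracy(w):
    """Fraction of watermark samples the linear model sign(x @ w) gets right."""
    return float(np.mean(np.sign(wm_x @ w) == wm_y))

# Server embeds the watermark into the initial global model: with N_WM < DIM,
# the minimum-norm solution of wm_x @ w = wm_y fits every watermark sample.
global_w = np.linalg.pinv(wm_x) @ wm_y

# Honest participants: benign local training perturbs the model only slightly,
# so the watermark survives.
honest = [global_w + 0.01 * rng.normal(size=DIM) for _ in range(3)]

# Malicious participant: heavily poisoned training overwrites the model and
# destroys the watermark (simulated here by flipping the weights).
malicious = [-global_w]

# Server-side integrity check: keep only returned models whose watermark
# accuracy clears a threshold, then aggregate the survivors.
THRESH = 0.9
returned = honest + malicious
kept = [w for w in returned if watermark_accuracy(w) >= THRESH]
new_global = np.mean(kept, axis=0)
```

In this toy setting the poisoned model fails the watermark check and is excluded, so the aggregated model retains full watermark accuracy regardless of how much data the malicious participant poisoned locally, which mirrors the paper's claimed robustness to high poisoned proportions.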
Pages: 288-294 (7 pages)