Federated Learning Backdoor Defense Based on Watermark Integrity

Cited by: 0
Authors
Hou, Yinjian [1 ]
Zhao, Yancheng [1 ]
Yao, Kaiqi [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha, Hunan, Peoples R China
Source
2024 10TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA 2024 | 2024
Keywords
federated learning; poisoning attacks; high poisoned proportion; watermark integrity;
DOI
10.1109/BIGDIA63733.2024.10808344
Abstract
As the application of federated learning becomes widespread, security issues, especially the threat of backdoor attacks, become increasingly prominent. Current defense measures against backdoor poisoning in federated learning are less effective when the proportion of poisoned data injected by malicious participants exceeds 50%. To address this issue, we propose a backdoor defense method for federated learning based on model watermarking. This method generates an initial global model with a watermark through the aggregation server and distributes it to local servers. By using the integrity of the returned watermark, it detects malicious participants, effectively enhancing the robustness of the global model. Experiments conducted on the CIFAR-10 and Tiny-ImageNet datasets demonstrate that our method can effectively detect and defend against backdoor poisoning attacks under a high proportion of poisoned data, as well as different triggers, attack methods, and scales.
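The detection idea described in the abstract can be sketched in a simplified form. The following is a toy illustration only, not the authors' implementation: a linear "model" stands in for a neural network, the watermark is a set of key inputs with fixed expected outputs stamped into the initial global model, and the threshold `tau`, the key count, and all function names are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical watermark: key inputs whose predicted sign the server fixes
# in the initial global model before distributing it to clients.
DIM, N_KEYS = 16, 32
w_global = rng.normal(size=DIM)            # stand-in for the global model
keys = rng.normal(size=(N_KEYS, DIM))      # watermark key inputs
labels = np.sign(keys @ w_global)          # behaviour stamped into the model

def watermark_integrity(w, keys, labels):
    """Fraction of watermark keys whose predicted sign is preserved in w."""
    return float(np.mean(np.sign(keys @ w) == labels))

def detect_malicious(updates, keys, labels, tau=0.8):
    """Flag returned updates whose watermark integrity falls below tau."""
    return [i for i, w in enumerate(updates)
            if watermark_integrity(w, keys, labels) < tau]

# Honest clients perturb the model only slightly, preserving the watermark;
# a poisoned update rewrites the model and destroys the stamped behaviour.
honest = [w_global + 0.01 * rng.normal(size=DIM) for _ in range(4)]
poisoned = [-w_global]
flagged = detect_malicious(honest + poisoned, keys, labels)  # flags index 4
```

The server would aggregate only the unflagged updates, which is how a check of this kind can remain effective even when flagged clients submit a high proportion of poisoned data.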
Pages: 288-294 (7 pages)
Related papers
50 records
  • [31] Distributed Backdoor Attacks in Federated Learning Generated by Dynamic Triggers
    Wang, Jian
    Shen, Hong
    Liu, Xuehua
    Zhou, Hua
    Li, Yuli
    INFORMATION SECURITY THEORY AND PRACTICE, WISTP 2024, 2024, 14625 : 178 - 193
  • [32] FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
    Castillo, Jorge
    Rieger, Phillip
    Fereidooni, Hossein
    Chen, Qian
    Sadeghi, Ahmad-Reza
    39TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2023, 2023, : 647 - 661
  • [33] Scope: On Detecting Constrained Backdoor Attacks in Federated Learning
    Huang, Siquan
    Li, Yijiang
    Yan, Xingfu
    Gao, Ying
    Chen, Chong
    Shi, Leyu
    Chen, Biao
    Ng, Wing W. Y.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3302 - 3315
  • [34] Federated learning backdoor attack detection with persistence diagram
    Ma, Zihan
    Gao, Tianchong
    COMPUTERS & SECURITY, 2024, 136
  • [35] SAFELearning: Secure Aggregation in Federated Learning With Backdoor Detectability
    Zhang, Zhuosheng
    Li, Jiarui
    Yu, Shucheng
    Makaya, Christian
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 3289 - 3304
  • [36] FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
    Wang, Qingya
    Wu, Yi
    Xuan, Haojun
    Wu, Huishu
    MATHEMATICS, 2024, 12 (23)
  • [37] Efficient and Secure Federated Learning Against Backdoor Attacks
    Miao, Yinbin
    Xie, Rongpeng
    Li, Xinghua
    Liu, Zhiquan
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4619 - 4636
  • [38] Towards Practical Backdoor Attacks on Federated Learning Systems
    Shi, Chenghui
    Ji, Shouling
    Pan, Xudong
    Zhang, Xuhong
    Zhang, Mi
    Yang, Min
    Zhou, Jun
    Yin, Jianwei
    Wang, Ting
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (06) : 5431 - 5447
  • [39] Towards defending adaptive backdoor attacks in Federated Learning
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 5078 - 5084
  • [40] Backdoor Attacks in Peer-to-Peer Federated Learning
    Syros, Georgios
    Yar, Gokberk
    Boboila, Simona
    Nita-Rotaru, Cristina
    Oprea, Alina
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2025, 28 (01)