Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity

Times Cited: 0
Authors
Walter, Kane [1 ]
Mohammady, Meisam [2 ]
Nepal, Surya [3 ]
Kanhere, Salil S. [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Iowa State Univ, Ames, IA, USA
[3] CSIRO, Data61, Sydney, NSW, Australia
Keywords
Federated Learning; Backdoor Attack; Mode Connectivity
DOI
10.1145/3634737.3637682
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Federated Learning (FL) is a privacy-preserving, collaborative machine learning technique in which multiple clients train a shared model on their private datasets without sharing the data. Despite these advantages, FL is susceptible to backdoor attacks, in which attackers insert malicious model updates into the model aggregation process. Compromised models predict attacker-chosen targets when presented with specific attacker-defined inputs. Existing backdoor defences generally rely on anomaly detection, on Differential Privacy (DP), or on legitimate clean test examples at the server. Anomaly detection-based defences can be defeated by stealth techniques and generally require inspection of client-submitted model updates. DP-based approaches tend to degrade the performance of the trained model due to excessive noise addition during training. Methods that rely on legitimate clean data at the server make strong assumptions about the task and may not be applicable in real-world settings. In this work, we view the question of backdoor attack robustness through the lens of loss-function optima to build a defence that overcomes these limitations. We propose Mode Connectivity Based Federated Learning (MCFL), which leverages mode connectivity, a recently discovered property of neural network loss surfaces. We simulate backdoor attack scenarios on computer vision benchmark datasets, including CIFAR10, Fashion MNIST, MNIST, and Federated EMNIST. Our findings show that MCFL converges to high-quality models and effectively mitigates backdoor attacks relative to baseline defences from the literature, without requiring inspection of client model updates or assuming clean data at the server.
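The abstract refers to mode connectivity: the observation that independently found minima of a neural network's loss can be joined by a simple curve of near-constant low loss. As a minimal, purely illustrative sketch (not the paper's MCFL algorithm), the Python snippet below fits the quadratic Bezier curve commonly used in mode-connectivity work to connect two flattened weight vectors w1 and w2 through a trainable bend point theta. The helper names and the loss_fn(w) -> (loss, grad) callable are assumptions introduced here for illustration.

    import numpy as np

    def bezier_point(w1, theta, w2, t):
        # Point phi(t) on the quadratic Bezier curve in weight space:
        # phi(t) = (1-t)^2 * w1 + 2*t*(1-t) * theta + t^2 * w2
        return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

    def train_bend_point(w1, w2, loss_fn, steps=1000, lr=1e-2, seed=0):
        # Fit the bend point theta so that the expected loss along the curve,
        # E_t[ loss(phi(t)) ], stays low -- the standard mode-connectivity
        # objective. loss_fn(w) is assumed to return (loss, grad) for a
        # flattened weight vector w; it is a placeholder, not part of MCFL.
        rng = np.random.default_rng(seed)
        theta = 0.5 * (w1 + w2)              # initialise on the straight segment
        for _ in range(steps):
            t = rng.uniform()                # sample a location along the curve
            w_t = bezier_point(w1, theta, w2, t)
            _, grad = loss_fn(w_t)
            # chain rule: d phi(t) / d theta = 2*t*(1-t)
            theta -= lr * 2 * t * (1 - t) * grad
        return theta

A mode-connectivity-based defence such as MCFL would build on this kind of curve-fitting primitive in weight space; the full MCFL procedure is described in the paper itself.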
Pages: 1287 - 1298
Number of pages: 12
Related Papers
50 items in total
  • [41] Breaking Distributed Backdoor Defenses for Federated Learning in Non-IID Settings
    Yang, Jijia
    Shu, Jiangang
    Jia, Xiaohua
    2022 18TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN, 2022, : 347 - 354
  • [42] Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach
    Guo, Yifan
    Wang, Qianlong
    Ji, Tianxi
    Wang, Xufei
    Li, Pan
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 1172 - 1182
  • [43] Unlearning Backdoor Attacks in Federated Learning
    Wu, Chen
    Zhu, Sencun
    Mitra, Prasenjit
    Wang, Wei
    2024 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS 2024, 2024,
  • [44] On the Vulnerability of Backdoor Defenses for Federated Learning
    Fang, Pei
    Chen, Jinghui
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 10, 2023, : 11800 - 11808
  • [45] Mitigating cross-client GANs-based attack in federated learning
    Huang, Hong
    Lei, Xinyu
    Xiang, Tao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (04) : 10925 - 10949
  • [47] Stealthy Backdoor Attack Towards Federated Automatic Speaker Verification
    Zhang, Longling
    Liu, Lyqi
    Meng, Dan
    Wang, Jun
    Hu, Shengshan
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 1311 - 1315
  • [48] Backdoor Federated Learning by Poisoning Key Parameters
    Song, Xuan
    Li, Huibin
    Hu, Kailang
    Zai, Guangjun
    ELECTRONICS, 2025, 14 (01):
  • [49] BayBFed: Bayesian Backdoor Defense for Federated Learning
    Kumari, Kavita
    Rieger, Phillip
    Fereidooni, Hossein
    Jadliwala, Murtuza
    Sadeghi, Ahmad-Reza
    2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP, 2023, : 737 - 754
  • [50] Federated Learning Watermark Based on Model Backdoor
    Li X.
    Deng T.-P.
    Xiong J.-B.
    Jin B.
    Lin J.
    Ruan Jian Xue Bao/Journal of Software, 2024, 35 (07): 3454 - 3468