Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity

Cited by: 0
Authors
Walter, Kane [1 ]
Mohammady, Meisam [2 ]
Nepal, Surya [3 ]
Kanhere, Salil S. [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Iowa State Univ, Ames, IA USA
[3] CSIRO, Data61, Sydney, NSW, Australia
Keywords
Federated Learning; Backdoor Attack; Mode Connectivity;
DOI: 10.1145/3634737.3637682
CLC number: TP [Automation technology, computer technology]
Discipline code: 0812
Abstract
Federated Learning (FL) is a privacy-preserving, collaborative machine learning technique in which multiple clients train a shared model on their private datasets without sharing the data. Despite these advantages, FL is susceptible to backdoor attacks, in which attackers insert malicious model updates into the model aggregation process. Compromised models predict attacker-chosen targets when presented with specific attacker-defined inputs. Backdoor defences generally rely on anomaly detection, techniques based on Differential Privacy (DP), or legitimate clean test examples at the server. Anomaly-detection-based defences can be defeated by stealth techniques and generally require inspection of client-submitted model updates. DP-based approaches tend to degrade the performance of the trained model due to excessive noise addition during training. Methods that require legitimate clean data at the server rest on strong assumptions about the task and may not be applicable in real-world settings. In this work, we view the question of backdoor attack robustness through the lens of the optimal points of the loss function to build a defence that overcomes these limitations. We propose Mode Connectivity Based Federated Learning (MCFL), which leverages a recently discovered property of neural network loss surfaces, mode connectivity. We simulate backdoor attack scenarios on computer vision benchmark datasets, including CIFAR10, Fashion MNIST, MNIST, and Federated EMNIST. Our findings show that MCFL converges to high-quality models and effectively mitigates backdoor attacks relative to baseline defences from the literature, without inspecting client model updates or assuming clean data at the server.
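The mode connectivity property the abstract leans on says that two independently trained minima ("modes") of a neural network loss surface can be joined by a curve along which the loss stays low. As an illustrative sketch only (not the paper's MCFL algorithm: the toy regression task, the random-search midpoint, and all names here are assumptions for exposition), one can parameterize a quadratic Bezier path between two weight vectors and fit its midpoint so that loss stays low along the whole path:

```python
import numpy as np

# Illustrative sketch of mode connectivity (NOT the MCFL implementation).
# Two endpoint "modes" theta_a, theta_b are joined by a quadratic Bezier
# curve phi(t) whose midpoint theta_m is chosen to keep loss low along it.

rng = np.random.default_rng(0)

# Toy regression task: loss(theta) = MSE of y = X @ theta.
X = rng.normal(size=(100, 5))
true_theta = rng.normal(size=5)
y = X @ true_theta

def loss(theta):
    r = X @ theta - y
    return float(np.mean(r * r))

# Two nearby minima, standing in for e.g. a global model and a candidate.
theta_a = true_theta + 0.1 * rng.normal(size=5)
theta_b = true_theta - 0.1 * rng.normal(size=5)

def bezier(t, theta_m):
    """Quadratic Bezier path between theta_a and theta_b with midpoint theta_m."""
    return (1 - t) ** 2 * theta_a + 2 * t * (1 - t) * theta_m + t ** 2 * theta_b

# Crude midpoint search: minimize the average loss along the path.
# (Real curve-finding uses gradient descent on theta_m; random search
# is used here only to keep the sketch dependency-free.)
ts = np.linspace(0.0, 1.0, 11)
best_m, best_val = None, np.inf
for _ in range(200):
    cand = 0.5 * (theta_a + theta_b) + 0.05 * rng.normal(size=5)
    val = np.mean([loss(bezier(t, cand)) for t in ts])
    if val < best_val:
        best_m, best_val = cand, val

# Loss evaluated along the learned curve stays small end to end.
path_losses = [loss(bezier(t, best_m)) for t in ts]
print(max(path_losses))
```

The defensive intuition, per the abstract, is that such low-loss paths can be exploited server-side without ever inspecting individual client updates.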
Pages: 1287-1298 (12 pages)