Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity

Cited by: 0
Authors
Walter, Kane [1 ]
Mohammady, Meisam [2 ]
Nepal, Surya [3 ]
Kanhere, Salil S. [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Iowa State Univ, Ames, IA, USA
[3] CSIRO, Data61, Sydney, NSW, Australia
Keywords
Federated Learning; Backdoor Attack; Mode Connectivity;
DOI
10.1145/3634737.3637682
CLC Classification
TP [Automation Technology; Computer Technology]
Subject Classification
0812
Abstract
Federated Learning (FL) is a privacy-preserving, collaborative machine learning technique where multiple clients train a shared model on their private datasets without sharing the data. While offering advantages, FL is susceptible to backdoor attacks, where attackers insert malicious model updates into the model aggregation process. Compromised models predict attacker-chosen targets when presented with specific attacker-defined inputs. Backdoor defences generally rely on anomaly detection techniques based on Differential Privacy (DP) or require legitimate clean test examples at the server. Anomaly detection-based defences can be defeated by stealth techniques and generally require inspection of client-submitted model updates. DP-based approaches tend to degrade the performance of the trained model due to excessive noise addition during training. Methods that require legitimate clean data on the server require strong assumptions about the task and may not be applicable in real-world settings. In this work, we view the question of backdoor attack robustness through the lens of loss function optimal points to build a defence that overcomes these limitations. We propose Mode Connectivity Based Federated Learning (MCFL), which leverages the recently discovered property of neural network loss surfaces, mode connectivity. We simulate backdoor attack scenarios using computer vision benchmark datasets, including CIFAR10, Fashion MNIST, MNIST, and Federated EMNIST. Our findings show that MCFL converges to high-quality models and effectively mitigates backdoor attacks relative to baseline defences from the literature without requiring inspection of client model updates or assuming clean data at the server.
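For readers unfamiliar with the mode connectivity property the abstract invokes: two independently trained minima ("modes") of a neural network loss surface can typically be joined by a low-loss curve, often parameterised as a quadratic Bézier curve with one trainable control point. The sketch below is illustrative only, assuming flattened parameter vectors `theta_a`, `theta_b` and a hypothetical control point `theta_mid`; it is not the MCFL algorithm from the paper.

```python
import numpy as np

def bezier_path(theta_a, theta_b, theta_mid, t):
    """Evaluate a quadratic Bezier curve in parameter space.

    theta_a, theta_b: flattened weight vectors of two trained models (the modes).
    theta_mid: trainable control point; training it to keep loss low along the
               curve is what establishes mode connectivity.
    t: position on the curve in [0, 1]; t=0 gives theta_a, t=1 gives theta_b.
    """
    return (1 - t) ** 2 * theta_a + 2 * t * (1 - t) * theta_mid + t ** 2 * theta_b

# Toy example with 4-dimensional "models" (hypothetical values).
theta_a = np.zeros(4)
theta_b = np.ones(4)
theta_mid = np.full(4, 0.5)

midpoint = bezier_path(theta_a, theta_b, theta_mid, 0.5)
```

Intuitively, points sampled along such a curve interpolate between the endpoint models; defenses built on this property exploit the observation that backdoor behaviour tends not to survive along the low-loss path even though clean-task accuracy does.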
Pages: 1287-1298 (12 pages)