CoBA: Collusive Backdoor Attacks With Optimized Trigger to Federated Learning

Cited: 0
Authors
Lyu, Xiaoting [1 ]
Han, Yufei [2 ]
Wang, Wei [3 ,4 ]
Liu, Jingkai [1 ]
Wang, Bin [5 ]
Chen, Kai [6 ]
Li, Yidong [1 ]
Liu, Jiqiang [1 ]
Zhang, Xiangliang [7 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Transp, Beijing 100044, Peoples R China
[2] INRIA, F-35042 Rennes, Bretagne, France
[3] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[4] Beijing Jiaotong Univ, Beijing 100044, Peoples R China
[5] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[6] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100093, Peoples R China
[7] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Funding
Beijing Natural Science Foundation
Keywords
Data models; Training; Servers; Computational modeling; Adaptation models; Federated learning; Training data; backdoor attack
DOI
10.1109/TDSC.2024.3445637
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Considerable efforts have been devoted to addressing distributed backdoor attacks in federated learning (FL) systems. While significant progress has been made in enhancing the security of FL systems, our study reveals that there remains a false sense of security surrounding FL. We demonstrate that colluding malicious participants can effectively execute backdoor attacks during the FL training process, exhibiting high sparsity and stealthiness, which means they can evade common defense methods with only a few attack iterations. Our research highlights this vulnerability by proposing a Collusive Backdoor Attack named CoBA. CoBA is designed to enhance the sparsity and stealthiness of backdoor attacks by offering trigger tuning to facilitate learning of backdoor training data, controlling the bias of malicious local model updates, and applying the projected gradient descent technique. By conducting extensive empirical studies on 5 benchmark datasets, we make the following observations: 1) CoBA successfully circumvents 15 state-of-the-art defense methods for robust FL; 2) Compared to existing backdoor attacks, CoBA consistently achieves superior attack performance; and 3) CoBA can achieve persistent poisoning effects through significantly sparse attack iterations. These findings raise substantial concerns regarding the integrity of FL and underscore the urgent need for heightened vigilance in defending against such attacks.
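The projected gradient descent technique mentioned in the abstract is typically used to keep a poisoned local model within a bounded distance of the current global model, so that its update norm stays in the range norm-clipping defenses tolerate. A minimal sketch of that projection step follows; the function name, the choice of an L2 ball, and the radius `eps` are illustrative assumptions, not details taken from the paper:

```python
import math

def project_update(global_w, malicious_w, eps):
    """Project a crafted local model back into an L2 ball of radius eps
    around the global model (weights given as flat lists of floats).
    Illustrative sketch only; not the paper's implementation."""
    # Deviation of the malicious model from the global model.
    delta = [m - g for m, g in zip(malicious_w, global_w)]
    norm = math.sqrt(sum(d * d for d in delta))
    if norm > eps:
        # Scale the deviation back onto the surface of the eps-ball,
        # preserving its direction while bounding its magnitude.
        scale = eps / norm
        delta = [d * scale for d in delta]
    return [g + d for g, d in zip(global_w, delta)]

# Example: a deviation of norm 5 is shrunk to norm 2.5.
projected = project_update([0.0, 0.0, 0.0], [3.0, 4.0, 0.0], eps=2.5)
# → [1.5, 2.0, 0.0]
```

In a real attack loop this projection would be applied after each local training step, interleaving backdoor optimization with the norm constraint rather than projecting once at the end.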
Pages: 1506-1518 (13 pages)
Related Papers
50 records
  • [41] Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers
    Gong, Xueluan
    Chen, Yanjiao
    Huang, Huayang
    Liao, Yuqing
    Wang, Shuai
    Wang, Qian
    IEEE NETWORK, 2022, 36 (01): 84 - 90
  • [42] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [43] FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
    Castillo, Jorge
    Rieger, Phillip
    Fereidooni, Hossein
    Chen, Qian
    Sadeghi, Ahmad-Reza
    39TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2023, 2023, : 647 - 661
  • [44] Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions
    Nguyen, Thuy Dung
    Nguyen, Tuan
    Nguyen, Phi Le
    Pham, Hieu H.
    Doan, Khoa D.
    Wong, Kok-Seng
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
  • [45] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)
  • [46] Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach
    Guo, Yifan
    Wang, Qianlong
    Ji, Tianxi
    Wang, Xufei
    Li, Pan
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 1172 - 1182
  • [47] How To Backdoor Federated Learning
    Bagdasaryan, Eugene
    Veit, Andreas
    Hua, Yiqing
    Estrin, Deborah
    Shmatikov, Vitaly
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2938 - 2947
  • [48] Efficient and persistent backdoor attack by boundary trigger set constructing against federated learning
    Yang, Deshan
    Luo, Senlin
    Zhou, Jinjie
    Pan, Limin
    Yang, Xiaonan
    Xing, Jiyuan
    INFORMATION SCIENCES, 2023, 651
  • [49] SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks
    Zhang, Webin
    Li, Youpeng
    An, Lingling
    Wan, Bo
    Wang, Xuyu
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2024, 8 (04)
  • [50] Invariant Aggregator for Defending against Federated Backdoor Attacks
    Wang, Xiaoyang
    Dimitriadis, Dimitrios
    Koyejo, Sanmi
    Tople, Shruti
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238