CoBA: Collusive Backdoor Attacks With Optimized Trigger to Federated Learning

Cited by: 0
Authors
Lyu, Xiaoting [1 ]
Han, Yufei [2 ]
Wang, Wei [3 ,4 ]
Liu, Jingkai [1 ]
Wang, Bin [5 ]
Chen, Kai [6 ]
Li, Yidong [1 ]
Liu, Jiqiang [1 ]
Zhang, Xiangliang [7 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Transp, Beijing 100044, Peoples R China
[2] INRIA, F-35042 Rennes, Bretagne, France
[3] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[4] Beijing Jiaotong Univ, Beijing 100044, Peoples R China
[5] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[6] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100093, Peoples R China
[7] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Funding
Beijing Natural Science Foundation;
Keywords
Data models; Training; Servers; Computational modeling; Adaptation models; Federated learning; Training data; backdoor attack;
DOI
10.1109/TDSC.2024.3445637
CLC Classification Code
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Considerable efforts have been devoted to addressing distributed backdoor attacks in federated learning (FL) systems. While significant progress has been made in enhancing the security of FL systems, our study reveals that there remains a false sense of security surrounding FL. We demonstrate that colluding malicious participants can effectively execute backdoor attacks during the FL training process, exhibiting high sparsity and stealthiness, which means they can evade common defense methods with only a few attack iterations. Our research highlights this vulnerability by proposing a Collusive Backdoor Attack named CoBA. CoBA is designed to enhance the sparsity and stealthiness of backdoor attacks by offering trigger tuning to facilitate learning of backdoor training data, controlling the bias of malicious local model updates, and applying the projected gradient descent technique. By conducting extensive empirical studies on 5 benchmark datasets, we make the following observations: 1) CoBA successfully circumvents 15 state-of-the-art defense methods for robust FL; 2) Compared to existing backdoor attacks, CoBA consistently achieves superior attack performance; and 3) CoBA can achieve persistent poisoning effects through significantly sparse attack iterations. These findings raise substantial concerns regarding the integrity of FL and underscore the urgent need for heightened vigilance in defending against such attacks.
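As a rough illustration of the attack ingredients named in the abstract (trigger tuning, biased malicious local updates, and projected gradient descent), the sketch below shows how a generic colluding client could be written in PyTorch. It is not the authors' CoBA implementation: the patch-style trigger, its 4x4 corner placement, the equal loss weighting, and the projection radius `epsilon` are all assumptions made purely for exposition.

```python
# Illustrative sketch only, NOT the paper's CoBA algorithm.
# A generic malicious FL client that (a) tunes a trigger patch while fitting
# the backdoor objective and (b) projects its model update into an L2 ball
# around the global model so the update looks closer to a benign one.
import copy
import torch
import torch.nn.functional as F

def poison(images, labels, trigger, target_class):
    """Stamp a learnable trigger patch onto a batch and relabel it."""
    patched = images.clone()
    patched[:, :, -4:, -4:] = trigger          # 4x4 corner patch (assumed shape)
    return patched, torch.full_like(labels, target_class)

def malicious_local_update(global_model, loader, trigger, target_class,
                           epsilon=1.0, lr=0.01, steps=50):
    """One attacker round: jointly optimize the model and the trigger on a mix
    of clean and backdoored batches, then project the resulting deviation from
    the global weights to an L2 norm of at most epsilon."""
    model = copy.deepcopy(global_model)
    opt_model = torch.optim.SGD(model.parameters(), lr=lr)
    opt_trig = torch.optim.SGD([trigger], lr=lr)

    for _, (x, y) in zip(range(steps), loader):
        x_bd, y_bd = poison(x, y, trigger, target_class)
        # Clean loss keeps main-task accuracy; backdoor loss plants the trigger.
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_bd), y_bd)
        opt_model.zero_grad()
        opt_trig.zero_grad()
        loss.backward()
        opt_model.step()
        opt_trig.step()
        trigger.data.clamp_(0, 1)               # keep the trigger a valid image patch

        # Projected gradient descent on the update: rescale the deviation from
        # the global model whenever its L2 norm exceeds epsilon.
        with torch.no_grad():
            delta = torch.cat([(p - g).flatten()
                               for p, g in zip(model.parameters(),
                                               global_model.parameters())])
            norm = delta.norm()
            if norm > epsilon:
                scale = epsilon / norm
                for p, g in zip(model.parameters(), global_model.parameters()):
                    p.data = g.data + (p.data - g.data) * scale
    return model.state_dict()
```

In this hypothetical usage, the attacker would initialize `trigger = torch.rand(3, 4, 4, requires_grad=True)` and submit the returned state dict as its local update; the L2 projection is what keeps the malicious update numerically close to the global model and therefore harder for distance- or norm-based defenses to flag.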
Pages: 1506-1518
Page count: 13
Related Papers
50 records in total
  • [21] Wang, Yongkang; Zhai, Di-Hua; Xia, Yuanqing. RoPE: Defending against backdoor attacks in federated learning systems. KNOWLEDGE-BASED SYSTEMS, 2024, 293.
  • [22] Miao, Lu; Yang, Wei; Hu, Rong; Li, Lu; Huang, Liusheng. Defending Against Backdoor Attacks in Federated Learning with Differential Privacy. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 2999-3003.
  • [23] Zhang, Jiale; Zhu, Chengcheng; Wu, Di; Sun, Xiaobing; Yong, Jianming; Long, Guodong. BADFSS: Backdoor Attacks on Federated Self-Supervised Learning. PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024: 548-558.
  • [24] Xuan, Yuexin; Chen, Xiaojun; Zhao, Zhendong; Tang, Bisheng; Dong, Ye. Practical and General Backdoor Attacks Against Vertical Federated Learning. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT II, 2023, 14170: 402-417.
  • [25] Wang, Hao; Mu, Xuejiao; Wang, Dong; Xu, Qiang; Li, Kaiju. Defending Against Data and Model Backdoor Attacks in Federated Learning. IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): 39276-39294.
  • [26] Chai, Ze; Gao, Zhipeng; Lin, Yijing; Zhao, Chen; Yu, Xinlei; Xie, Zhiqiang. Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning. ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024: 4614-4619.
  • [27] Xie, Chulin; Chen, Minghao; Chen, Pin-Yu; Li, Bo. CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139.
  • [28] Chen, Xiang; Liu, Bo; Zhao, Shaofeng; Liu, Ming; Xu, Hui; Li, Zhanbo; Zheng, Zhigao. ICT: Invisible Computable Trigger Backdoor Attacks in Transfer Learning. IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (04): 6747-6758.
  • [29] Chen, Yu-Wen; Ke, Bo-Hsu; Chen, Bo-Zhong; Chiu, Si-Rong; Tu, Chun-Wei; Kuo, Jian-Jhih. Knowledge Distillation Based Defense for Audio Trigger Backdoor in Federated Learning. IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023: 4271-4276.
  • [30] Chen, Yu-Wen; Ke, Bo-Hsu; Chen, Bo-Zhong; Chiu, Si-Rong; Tu, Chun-Wei; Kuo, Jian-Jhih. Successive Interference Cancellation Based Defense for Trigger Backdoor in Federated Learning. ICC 2023 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 26-32.