CoBA: Collusive Backdoor Attacks With Optimized Trigger to Federated Learning

Cited by: 0
Authors
Lyu, Xiaoting [1 ]
Han, Yufei [2 ]
Wang, Wei [3 ,4 ]
Liu, Jingkai [1 ]
Wang, Bin [5 ]
Chen, Kai [6 ]
Li, Yidong [1 ]
Liu, Jiqiang [1 ]
Zhang, Xiangliang [7 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Transp, Beijing 100044, Peoples R China
[2] INRIA, F-35042 Rennes, Bretagne, France
[3] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[4] Beijing Jiaotong Univ, Beijing 100044, Peoples R China
[5] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[6] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100093, Peoples R China
[7] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Funding
Beijing Natural Science Foundation
Keywords
Data models; Training; Servers; Computational modeling; Adaptation models; Federated learning; Training data; Backdoor attack
DOI
10.1109/TDSC.2024.3445637
Chinese Library Classification
TP3 [Computing Technology; Computer Technology]
Discipline Code
0812
Abstract
Considerable efforts have been devoted to addressing distributed backdoor attacks in federated learning (FL) systems. While significant progress has been made in enhancing the security of FL systems, our study reveals that there remains a false sense of security surrounding FL. We demonstrate that colluding malicious participants can effectively execute backdoor attacks during the FL training process, exhibiting high sparsity and stealthiness, which means they can evade common defense methods with only a few attack iterations. Our research highlights this vulnerability by proposing a Collusive Backdoor Attack named CoBA. CoBA is designed to enhance the sparsity and stealthiness of backdoor attacks by offering trigger tuning to facilitate learning of backdoor training data, controlling the bias of malicious local model updates, and applying the projected gradient descent technique. By conducting extensive empirical studies on 5 benchmark datasets, we make the following observations: 1) CoBA successfully circumvents 15 state-of-the-art defense methods for robust FL; 2) Compared to existing backdoor attacks, CoBA consistently achieves superior attack performance; and 3) CoBA can achieve persistent poisoning effects through significantly sparse attack iterations. These findings raise substantial concerns regarding the integrity of FL and underscore the urgent need for heightened vigilance in defending against such attacks.
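The abstract notes that CoBA applies the projected gradient descent technique to keep malicious local model updates close to benign ones. As a rough illustration of that idea (not the authors' actual algorithm; the function name, reference point, and radius below are hypothetical), a malicious update can be projected onto an L2 ball around a benign reference so it survives norm-based anomaly checks:

```python
import numpy as np

def project_update(malicious_update, benign_reference, epsilon):
    """Project a malicious model update onto the L2 ball of radius
    epsilon centered at a benign reference update. This is a generic
    PGD-style constraint, sketched here for illustration only."""
    delta = malicious_update - benign_reference
    norm = np.linalg.norm(delta)
    if norm > epsilon:
        # Rescale the deviation so it sits exactly on the ball surface.
        delta = delta * (epsilon / norm)
    return benign_reference + delta

# A malicious update far from the benign reference is pulled back
# to distance epsilon, so its norm no longer stands out.
benign = np.zeros(4)
malicious = np.array([3.0, 0.0, 0.0, 0.0])
projected = project_update(malicious, benign, epsilon=1.0)
```

After projection, `np.linalg.norm(projected - benign)` equals `epsilon`, which is the property a norm-clipping defense would inspect.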
Pages: 1506 - 1518
Page count: 13
Related Papers
50 items in total
  • [31] MITDBA: Mitigating Dynamic Backdoor Attacks in Federated Learning for IoT Applications
    Wang, Yongkang
    Zhai, Di-Hua
    Han, Dongyu
    Guan, Yuyin
    Xia, Yuanqing
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (06): 10115 - 10132
  • [32] FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks
    Wang, Weiqi
    Zhang, Chenhan
    Liu, Shushu
    Tang, Mingjian
    Liu, An
    Yu, Shui
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4873 - 4878
  • [33] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [34] SCFL: Mitigating backdoor attacks in federated learning based on SVD and clustering
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    COMPUTERS & SECURITY, 2023, 133
  • [35] An adaptive robust defending algorithm against backdoor attacks in federated learning
    Wang, Yongkang
    Zhai, Di-Hua
    He, Yongping
    Xia, Yuanqing
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 143 : 118 - 131
  • [36] A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning
    Zhang, Hangfan
    Jia, Jinyuan
    Chen, Jinghui
    Lin, Lu
    Wu, Dinghao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36, NEURIPS 2023, 2023,
  • [37] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62
  • [38] Never Too Late: Tracing and Mitigating Backdoor Attacks in Federated Learning
    Zeng, Hui
    Zhou, Tongqing
    Wu, Xinyi
    Cai, Zhiping
    2022 41ST INTERNATIONAL SYMPOSIUM ON RELIABLE DISTRIBUTED SYSTEMS (SRDS 2022), 2022, : 69 - 81
  • [39] BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
    Bai, Jiawang
    Gao, Kuofeng
    Min, Shaobo
    Xia, Shu-Tao
    Li, Zhifeng
    Liu, Wei
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 24239 - 24250
  • [40] Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective
    Qin, Zhen
    Chen, Feiyi
    Zhi, Chen
    Yan, Xueqiang
    Deng, Shuiguang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 14677 - 14685