CoBA: Collusive Backdoor Attacks With Optimized Trigger to Federated Learning

Cited by: 0
Authors
Lyu, Xiaoting [1 ]
Han, Yufei [2 ]
Wang, Wei [3 ,4 ]
Liu, Jingkai [1 ]
Wang, Bin [5 ]
Chen, Kai [6 ]
Li, Yidong [1 ]
Liu, Jiqiang [1 ]
Zhang, Xiangliang [7 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Transp, Beijing 100044, Peoples R China
[2] INRIA, F-35042 Rennes, Bretagne, France
[3] Xi An Jiao Tong Univ, Minist Educ Key Lab Intelligent Networks & Network, Xian 710049, Peoples R China
[4] Beijing Jiaotong Univ, Beijing 100044, Peoples R China
[5] Zhejiang Key Lab Artificial Intelligence Things AI, Hangzhou 310053, Peoples R China
[6] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100093, Peoples R China
[7] Univ Notre Dame, Dept Comp Sci & Engn, Notre Dame, IN 46556 USA
Funding
Beijing Natural Science Foundation
Keywords
Data models; Training; Servers; Computational modeling; Adaptation models; Federated learning; Training data; backdoor attack;
DOI
10.1109/TDSC.2024.3445637
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Considerable efforts have been devoted to addressing distributed backdoor attacks in federated learning (FL) systems. While significant progress has been made in enhancing the security of FL systems, our study reveals that there remains a false sense of security surrounding FL. We demonstrate that colluding malicious participants can effectively execute backdoor attacks during the FL training process with high sparsity and stealthiness, meaning they can evade common defense methods with only a few attack iterations. Our research highlights this vulnerability by proposing a Collusive Backdoor Attack named CoBA. CoBA enhances the sparsity and stealthiness of backdoor attacks by tuning the trigger to facilitate learning of the backdoor training data, controlling the bias of malicious local model updates, and applying the projected gradient descent technique. Through extensive empirical studies on five benchmark datasets, we make the following observations: 1) CoBA successfully circumvents 15 state-of-the-art defense methods for robust FL; 2) compared to existing backdoor attacks, CoBA consistently achieves superior attack performance; and 3) CoBA achieves persistent poisoning effects through significantly sparse attack iterations. These findings raise substantial concerns regarding the integrity of FL and underscore the urgent need for heightened vigilance in defending against such attacks.
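The abstract mentions that CoBA applies the projected gradient descent technique to keep malicious local model updates stealthy. As a minimal illustrative sketch only (not the paper's implementation; the function name, the `eps` radius, and the choice of an L2 ball around a benign reference update are assumptions), the projection step commonly used to evade norm-based FL defenses can look like:

```python
import numpy as np

def project_update(update, reference, eps):
    """Project a model update onto the L2 ball of radius eps centered
    at a reference (e.g., benign) update. Illustrative sketch only."""
    delta = update - reference
    norm = np.linalg.norm(delta)
    if norm > eps:
        # Rescale the deviation so the projected update lies on the ball.
        delta = delta * (eps / norm)
    return reference + delta

# Example: a large malicious deviation is pulled back toward the reference.
benign = np.zeros(4)
malicious = np.array([3.0, 4.0, 0.0, 0.0])  # deviation norm = 5
clipped = project_update(malicious, benign, eps=1.0)
print(np.linalg.norm(clipped - benign))  # now within the eps ball
```

In a PGD-style attack loop, this projection would typically be applied after each malicious local training step, so the submitted update stays within a norm bound that magnitude-based defenses treat as benign.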
Pages: 1506 - 1518
Page count: 13
Related Papers
50 records in total
  • [1] Collusive Backdoor Attacks in Federated Learning Frameworks for IoT Systems
    Alharbi, Saier
    Guo, Yifan
    Yu, Wei
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (11) : 19694 - 19707
  • [2] Unlearning Backdoor Attacks in Federated Learning
    Wu, Chen
    Zhu, Sencun
    Mitra, Prasenjit
    Wang, Wei
    2024 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS 2024, 2024
  • [3] Optimally Mitigating Backdoor Attacks in Federated Learning
    Walter, Kane
    Mohammady, Meisam
    Nepal, Surya
    Kanhere, Salil S.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2949 - 2963
  • [4] ANODYNE: Mitigating backdoor attacks in federated learning
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [5] BadVFL: Backdoor Attacks in Vertical Federated Learning
    Naseri, Mohammad
    Han, Yufei
    De Cristofaro, Emiliano
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2013 - 2028
  • [6] An Investigation of Recent Backdoor Attacks and Defenses in Federated Learning
    Chen, Qiuxian
    Tao, Yizheng
    2023 EIGHTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING, FMEC, 2023, : 262 - 269
  • [7] Distributed Backdoor Attacks in Federated Learning Generated by Dynamic Triggers
    Wang, Jian
    Shen, Hong
    Liu, Xuehua
    Zhou, Hua
    Li, Yuli
    INFORMATION SECURITY THEORY AND PRACTICE, WISTP 2024, 2024, 14625 : 178 - 193
  • [8] Scope: On Detecting Constrained Backdoor Attacks in Federated Learning
    Huang, Siquan
    Li, Yijiang
    Yan, Xingfu
    Gao, Ying
    Chen, Chong
    Shi, Leyu
    Chen, Biao
    Ng, Wing W. Y.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3302 - 3315
  • [9] Towards defending adaptive backdoor attacks in Federated Learning
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 5078 - 5084
  • [10] Backdoor Attacks in Peer-to-Peer Federated Learning
    Syros, Georgios
    Yar, Gokberk
    Boboila, Simona
    Nita-Rotaru, Cristina
    Oprea, Alina
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2025, 28 (01)