SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks

Cited: 0
Authors
Zhang, Webin [1 ]
Li, Youpeng [1 ]
An, Lingling [2 ]
Wan, Bo [2 ]
Wang, Xuyu [3 ]
Affiliations
[1] Xidian Univ, Guangzhou Inst Technol, Guangzhou, Peoples R China
[2] Xidian Univ, Sch Comp Sci & Technol, Xian, Peoples R China
[3] Florida Int Univ, Knight Fdn, Sch Comp & Informat Sci, Miami, FL 33199 USA
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT | 2024 / Vol. 8 / Issue 4
Keywords
Federated Learning; Backdoor Attack; Attention Distillation; Fairness
DOI
10.1145/3678571
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL), an emerging distributed machine learning framework that enables clients to collaboratively train a global model by sharing local knowledge without disclosing local private data, is vulnerable to backdoor model poisoning attacks. By compromising some users, the attacker manipulates their local training process and uploads malicious gradient updates to poison the global model, causing the poisoned global model to behave abnormally on sub-tasks specified by the malicious user. Prior research has proposed various strategies to mitigate backdoor attacks. However, existing FL backdoor defense methods affect the fairness of the FL system, while fair FL performance may not be robust. Motivated by these concerns, in this paper, we propose Self-Awareness ReviSion (SARS), a personalized FL framework designed to resist backdoor attacks and ensure the fairness of the FL system. SARS consists of two key modules: adaptation feature extraction and knowledge mapping. In the adaptation feature extraction module, benign users adaptively extract clean global knowledge through self-awareness and self-revision of the backdoor knowledge transferred from the global model. Building on this module, the knowledge mapping module ensures the correct mapping between clean sample features and labels. Extensive experimental results demonstrate that SARS can defend against backdoor attacks and improve the fairness of the FL system, compared with several state-of-the-art FL backdoor defenses and fair FL methods, including FedAvg, Ditto, WeakDP, FoolsGold, and FLAME.
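The attack surface the abstract describes can be illustrated with a minimal, hypothetical sketch: plain FedAvg aggregation in which one compromised client boosts a poisoned update so that it dominates the average (a standard model-replacement backdoor). This is not the SARS defense itself; `fedavg`, `backdoor_direction`, and the client updates are illustrative names and toy values, not from the paper.

```python
import numpy as np

def fedavg(updates, weights=None):
    """FedAvg aggregation: weighted average of client model updates."""
    if weights is None:
        weights = [1.0 / len(updates)] * len(updates)
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)

# Nine benign clients submit small, honest updates (toy 4-dim models).
benign = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]

# One compromised client scales its update along a backdoor direction
# so that, after averaging, the poisoned component survives.
backdoor_direction = np.array([1.0, -1.0, 1.0, -1.0])
malicious = 10.0 * backdoor_direction  # boosted (model-replacement) update

poisoned_global = fedavg(benign + [malicious])
clean_global = fedavg(benign)

# The poisoned aggregate is strongly aligned with the backdoor direction,
# while the clean aggregate is not.
print(np.dot(poisoned_global, backdoor_direction))  # large (≈ 4)
print(np.dot(clean_global, backdoor_direction))     # near zero
```

Robust aggregation and personalization defenses (including the compared baselines WeakDP, FoolsGold, and FLAME) aim to suppress exactly this kind of dominated average, each with a different trade-off against per-client fairness.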
Pages: 24
Related Papers
50 records in total
  • [21] Distributed Backdoor Attacks in Federated Learning Generated by Dynamic Triggers
    Wang, Jian
    Shen, Hong
    Liu, Xuehua
    Zhou, Hua
    Li, Yuli
    INFORMATION SECURITY THEORY AND PRACTICE, WISTP 2024, 2024, 14625 : 178 - 193
  • [22] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [23] Scope: On Detecting Constrained Backdoor Attacks in Federated Learning
    Huang, Siquan
    Li, Yijiang
    Yan, Xingfu
    Gao, Ying
    Chen, Chong
    Shi, Leyu
    Chen, Biao
    Ng, Wing W. Y.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3302 - 3315
  • [24] ANODYNE: Mitigating backdoor attacks in federated learning
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [25] BadVFL: Backdoor Attacks in Vertical Federated Learning
    Naseri, Mohammad
    Han, Yufei
    De Cristofaro, Emiliano
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2013 - 2028
  • [26] FMDL: Federated Mutual Distillation Learning for Defending Backdoor Attacks
    Sun, Hanqi
    Zhu, Wanquan
    Sun, Ziyu
    Cao, Mingsheng
    Liu, Wenbin
    ELECTRONICS, 2023, 12 (23)
  • [27] Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
    Mi, Yuxi
    Sun, Yiheng
    Guan, Jihong
    Zhou, Shuigeng
    WEB AND BIG DATA, PT III, APWEB-WAIM 2023, 2024, 14333 : 111 - 126
  • [28] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)
  • [29] CoBA: Collusive Backdoor Attacks With Optimized Trigger to Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Chen, Kai
    Li, Yidong
    Liu, Jiqiang
    Zhang, Xiangliang
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1506 - 1518
  • [30] Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark
    Huang, Wenke
    Ye, Mang
    Shi, Zekun
    Wan, Guancheng
    Li, He
    Du, Bo
    Yang, Qiang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 9387 - 9406