FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks

Cited by: 0
Authors
Chen, Jian [1 ]
Lin, Zehui [1 ]
Lin, Wanyu [1 ,2 ]
Shi, Wenlong [3 ]
Yin, Xiaoyan [4 ]
Wang, Di [5 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Data Sci & Artificial Intelligence, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[4] Northwest Univ, Sch Informat Sci & Technol, Xian 710069, Peoples R China
[5] King Abdullah Univ Sci & Technol, Div Comp Elect & Math Sci & Engn, Thuwal 23955, Saudi Arabia
Keywords
Predictive models; Data models; Servers; Federated learning; Computational modeling; Training; Training data; Robustness; General Data Protection Regulation; Distributed databases; unlearning attacks; targeted attacks
DOI
10.1109/TIFS.2025.3531141
Chinese Library Classification (CLC)
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Recently, the practical need for "the right to be forgotten" in federated learning gave rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model in light of the discrepancy between its prediction performance before and after unlearning. To bridge this gap, we take a first step by introducing a novel malicious unlearning attack, dubbed FedMUA, aiming to unveil potential vulnerabilities of federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests to manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the influential samples for the target sample than anticipated, thereby inducing adverse effects on target samples belonging to other clients. To achieve this, we design a novel two-step method, consisting of Influential Sample Identification and Malicious Unlearning Generation, to identify the influential samples and then generate malicious feature unlearning requests within them. By initiating these malicious feature unlearning requests, we can significantly alter the predictions for the target sample, deliberately manipulating the outcome to the affected user's detriment. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets reveal that FedMUA effectively induces misclassification on target samples and achieves an 80% attack success rate by triggering only 0.3% malicious unlearning requests.
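To make the first step of the pipeline described above more concrete, the sketch below illustrates one plausible way a malicious client could perform Influential Sample Identification: scoring each local training sample by a first-order influence proxy (the alignment between its loss gradient and the target sample's loss gradient) and flagging the top-k samples as unlearning requests. This is only a minimal sketch under assumed design choices; the paper's exact scoring rule, the second step (Malicious Unlearning Generation), and all function names here (per_sample_grad, score_influence, select_unlearning_requests) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: first-order influence scoring of local training samples with
# respect to a chosen target sample, as a stand-in for the paper's
# "Influential Sample Identification" step. Assumes a PyTorch classifier.
import torch
import torch.nn.functional as F


def per_sample_grad(model, x, y):
    """Flattened gradient of the loss on a single (x, y) pair w.r.t. the model parameters."""
    model.zero_grad()
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])


def score_influence(model, train_xs, train_ys, target_x, target_y):
    """First-order influence proxy: dot product between each training-sample gradient
    and the target-sample gradient. A larger score suggests that unlearning the sample
    would shift the target prediction more."""
    g_target = per_sample_grad(model, target_x, target_y)
    scores = []
    for x, y in zip(train_xs, train_ys):
        g_i = per_sample_grad(model, x, y)
        scores.append(torch.dot(g_i, g_target).item())
    return scores


def select_unlearning_requests(scores, top_k):
    """Indices of the top-k most influential samples, i.e. the samples a malicious
    client would submit (possibly perturbed) unlearning requests for."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:top_k]
```

In this hypothetical setup, the attacker would pass its local data and the victim's target sample through score_influence and hand the selected indices to the unlearning-request generation step; how those requests are then perturbed is specific to the paper and not reproduced here.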
Pages: 1665-1678
Page count: 14