FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks

Cited: 0
Authors
Chen, Jian [1 ]
Lin, Zehui [1 ]
Lin, Wanyu [1 ,2 ]
Shi, Wenlong [3 ]
Yin, Xiaoyan [4 ]
Wang, Di [5 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Data Sci & Artificial Intelligence, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[4] Northwest Univ, Sch Informat Sci & Technol, Xian 710069, Peoples R China
[5] King Abdullah Univ Sci & Technol, Div Comp Elect & Math Sci & Engn, Thuwal 23955, Saudi Arabia
Keywords
Predictive models; Data models; Servers; Federated learning; Computational modeling; Training; Training data; Robustness; General Data Protection Regulation; Distributed databases; unlearning attacks; targeted attacks
DOI
10.1109/TIFS.2025.3531141
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Recently, the practical need for "the right to be forgotten" in federated learning has given rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model in light of the discrepancy between its prediction behavior before and after unlearning. To bridge this gap, we take a first step by introducing a novel malicious unlearning attack, dubbed FedMUA, which unveils potential vulnerabilities that emerge in federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests that manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the samples influential for a target sample than anticipated, thereby inducing adverse effects on target samples belonging to other clients. To achieve this, we design a novel two-step method, Influential Sample Identification and Malicious Unlearning Generation, which identifies the influential samples and then generates malicious feature unlearning requests within them. Issuing these requests significantly alters the predictions on the target sample, deliberately harming the affected user. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets show that FedMUA effectively induces misclassification on target samples, achieving an 80% attack success rate while triggering only 0.3% malicious unlearning requests.
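The two-step pipeline described in the abstract hinges on first scoring how influential each training sample is for the attacker's target. The paper's exact procedure is not reproduced in this record; below is a minimal, hypothetical sketch of the classic influence-function scoring idea for a logistic-regression model, which is one plausible instantiation of the Influential Sample Identification step. All names (grad_loss, influence_scores, the toy data) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y):
    # Gradient of the logistic loss l(w; x, y) w.r.t. w for one example.
    return (sigmoid(x @ w) - y) * x

def hessian(w, X):
    # Hessian of the average logistic loss over the training set,
    # with a small ridge term so it is safely invertible.
    p = sigmoid(X @ w)
    S = p * (1 - p)                          # per-example curvature weights
    return (X * S[:, None]).T @ X / X.shape[0] + 1e-3 * np.eye(X.shape[1])

def influence_scores(w, X_train, y_train, x_target, y_target):
    # Koh & Liang-style score I(z_i, z_t) = -grad(z_t)^T H^{-1} grad(z_i).
    # Removing z_i shifts the target loss by about -(1/n) * I(z_i, z_t),
    # so the most NEGATIVE scores mark the samples whose removal most
    # increases the loss on the target (a first-order estimate).
    H_inv = np.linalg.inv(hessian(w, X_train))
    v = H_inv @ grad_loss(w, x_target, y_target)
    return np.array([-v @ grad_loss(w, X_train[i], y_train[i])
                     for i in range(len(X_train))])

# --- toy usage: pick the top-k candidates for unlearning requests ---
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(300):                          # plain gradient-descent fit
    w -= 0.5 * np.mean([grad_loss(w, X[i], y[i]) for i in range(200)], axis=0)

x_t, y_t = X[0], y[0]                         # attacker's target sample
scores = influence_scores(w, X, y, x_t, y_t)
influential_idx = np.argsort(scores)[:10]     # most harmful-to-remove points
print("candidate samples for malicious unlearning requests:", influential_idx)
```

In the full attack, the second step (Malicious Unlearning Generation) would then craft feature unlearning requests over these candidates so that the post-unlearning global model misclassifies the target; that step depends on details not included in this abstract.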
Pages: 1665-1678
Page count: 14
Related Papers
50 records in total
  • [41] Robust Federated Learning: Maximum Correntropy Aggregation Against Byzantine Attacks
    Luan, Zhirong
    Li, Wenrui
    Liu, Meiqin
    Chen, Badong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 62 - 75
  • [42] D2MIF: A Malicious Model Detection Mechanism for Federated-Learning-Empowered Artificial Intelligence of Things
    Liu, Wenxin
    Lin, Hui
    Wang, Xiaoding
    Hu, Jia
    Kaddoum, Georges
    Piran, Md. Jalil
    Alamri, Atif
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (03) : 2141 - 2151
  • [43] FedXPro: Bayesian Inference for Mitigating Poisoning Attacks in IoT Federated Learning
    Indrasiri, Pubudu L.
    Nguyen, Dinh C.
    Kashyap, Bipasha
    Pathirana, Pubudu N.
    Eldar, Yonina C.
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (07) : 12115 - 12131
  • [44] Against network attacks in renewable power plants: Malicious behavior defense for federated learning
    Wu, Xiaodong
    Jin, Zhigang
    Zhou, Junyi
    Liu, Kai
    Liu, Zepei
    COMPUTER NETWORKS, 2024, 250
  • [45] Blockchain Enabled Federated Learning for Detection of Malicious Internet of Things Nodes
    Alami, Rachid
    Biswas, Anjanava
    Shinde, Varun
    Almogren, Ahmad
    Rehman, Ateeq Ur
    Shaikh, Tahseen
    IEEE ACCESS, 2024, 12 : 188174 - 188185
  • [46] Incentive Mechanism Design for Federated Learning and Unlearning
    Ding, Ningning
    Sun, Zhenyu
    Wei, Ermin
    Berry, Randall
    PROCEEDINGS OF THE 2023 INTERNATIONAL SYMPOSIUM ON THEORY, ALGORITHMIC FOUNDATIONS, AND PROTOCOL DESIGN FOR MOBILE NETWORKS AND MOBILE COMPUTING, MOBIHOC 2023, 2023, : 11 - 20
  • [47] Efficient and Secure Federated Learning Against Backdoor Attacks
    Miao, Yinbin
    Xie, Rongpeng
    Li, Xinghua
    Liu, Zhiquan
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4619 - 4636
  • [48] A Verifiable Privacy-Preserving Federated Learning Framework Against Collusion Attacks
    Chen, Yange
    He, Suyu
    Wang, Baocang
    Feng, Zhanshen
    Zhu, Guanghui
    Tian, Zhihong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (05) : 3918 - 3934
  • [49] Sine: Similarity is Not Enough for Mitigating Local Model Poisoning Attacks in Federated Learning
    Kasyap, Harsh
    Tripathy, Somanath
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4481 - 4494
  • [50] A Federated Learning Framework for Detecting False Data Injection Attacks in Solar Farms
    Zhao, Liang
    Li, Jiaming
    Li, Qi
    Li, Fangyu
    IEEE TRANSACTIONS ON POWER ELECTRONICS, 2022, 37 (03) : 2496 - 2501