FedPTA: Prior-Based Tensor Approximation for Detecting Malicious Clients in Federated Learning

Cited by: 1
Authors
Mu, Xutong [1 ]
Cheng, Ke [1 ,2 ]
Liu, Teng [1 ]
Zhang, Tao [1 ]
Geng, Xueli [3 ]
Shen, Yulong [1 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Xian Univ Posts & Telecommun, Key Lab Informat Commun Network & Secur, Xian 710121, Shaanxi, Peoples R China
[3] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Tensors; Computational modeling; Data models; Federated learning; Accuracy; Servers; Training; poisoning attack; malicious clients; prior-based tensor approximation
DOI
10.1109/TIFS.2024.3451359
CLC Classification
TP301 [Theory and Methods]
Subject Code
081202
Abstract
Federated learning (FL) is vulnerable to poisoning attacks, in which malicious clients tamper with their model parameters to degrade the global model. Existing defenses against poisoning attacks rely primarily on identifying malicious clients but struggle to balance robustness and efficiency. To address these issues, we propose FedPTA, a Prior-based Tensor Approximation (PTA) method. The core idea of FedPTA is to detect malicious clients in federated learning by leveraging inherent priors. The method first models multi-round model parameters as a three-dimensional tensor and unfolds it along different dimensions. Three inherent priors, namely the similarity among benign clients, the continuity of multi-round client model parameters, and the sparsity of malicious parameters, are then integrated into a convex optimization framework. Solving this optimization yields the optimal background tensor and anomaly tensor. Finally, the anomaly tensor highlights the element-level features of malicious parameters, effectively distinguishing malicious clients. Evaluations supported by theoretical analysis demonstrate the effectiveness of FedPTA, which outperforms current state-of-the-art methods in both detection accuracy and computational efficiency.
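The background/anomaly split described in the abstract resembles a low-rank-plus-sparse decomposition in the style of robust PCA. The sketch below is a hypothetical illustration only, not the authors' implementation: it stacks multi-round client updates into a three-dimensional tensor, unfolds it along the client mode, separates a low-rank background from a sparse anomaly component via a standard ADMM solver (principal component pursuit, used here as a stand-in for the paper's convex program), and flags the client with the largest anomaly energy. All names, shapes, and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, tau):
    """Elementwise shrinkage (proximal step for the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_threshold(x, tau):
    """Singular-value shrinkage (proximal step for the nuclear norm)."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt

def low_rank_sparse_split(m, iters=200, tol=1e-7):
    """Split m into a low-rank 'background' and a sparse 'anomaly' part
    via ADMM; a generic stand-in for FedPTA's optimization step."""
    lam = 1.0 / np.sqrt(max(m.shape))
    mu = m.size / (4.0 * np.abs(m).sum())
    background = np.zeros_like(m)
    anomaly = np.zeros_like(m)
    dual = np.zeros_like(m)
    for _ in range(iters):
        background = svd_threshold(m - anomaly + dual / mu, 1.0 / mu)
        anomaly = soft_threshold(m - background + dual / mu, lam / mu)
        resid = m - background - anomaly
        dual += mu * resid
        if np.linalg.norm(resid) <= tol * np.linalg.norm(m):
            break
    return background, anomaly

# Toy setup: 10 clients, 5 rounds, 50 parameters; client 3 injects sparse spikes.
n_clients, n_rounds, n_params = 10, 5, 50
base = rng.normal(size=(n_rounds, n_params))           # shared benign signal per round
tensor = base[None, :, :] + 0.01 * rng.normal(size=(n_clients, n_rounds, n_params))
mask = rng.random((n_rounds, n_params)) < 0.1          # sparse malicious coordinates
tensor[3][mask] += 5.0

# Client-mode unfolding: one row per client, spanning all rounds and parameters.
unfolded = tensor.reshape(n_clients, n_rounds * n_params)
_, anomaly = low_rank_sparse_split(unfolded)
scores = np.mean(np.abs(anomaly), axis=1)              # per-client anomaly energy
detected = int(np.argmax(scores))
print("flagged client:", detected)
```

Because the benign rows of the unfolded matrix are nearly identical (low rank) while the malicious perturbation is sparse and large, the anomaly component concentrates on the tampered client's row, which is the intuition behind scoring clients by anomaly energy.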
Pages: 9100-9114
Page count: 15