FedDMC: Efficient and Robust Federated Learning via Detecting Malicious Clients

Cited by: 7
Authors
Mu, Xutong [1 ]
Cheng, Ke [1 ,2 ]
Shen, Yulong [1 ]
Li, Xiaoxiao [3 ]
Chang, Zhao [1 ]
Zhang, Tao [1 ]
Ma, Xindi [4 ]
Affiliations
[1] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Shaanxi, Peoples R China
[2] Xian Univ Posts & Telecommun, Shaanxi Key Lab Informat Commun Network & Secur, Xian 710121, Shaanxi, Peoples R China
[3] Univ British Columbia, Elect & Comp Engn, V6T 1Z4 Vancouver, BC, Canada
[4] Xidian Univ, Sch Cyber Engn, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Computational modeling; Federated learning; Data models; Servers; Robustness; Training; Aggregates; Clustering; federated learning; malicious clients; poisoning attack;
DOI
10.1109/TDSC.2024.3372634
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) has gained popularity in the field of machine learning because it allows multiple participants to collaboratively learn a highly accurate global model without exposing their sensitive data. However, FL is susceptible to poisoning attacks, in which malicious clients manipulate local model parameters to corrupt the global model. Existing FL frameworks based on detecting malicious clients either rest on unrealistic assumptions (e.g., the availability of a clean validation dataset) or fail to balance robustness and efficiency. To address these deficiencies, we propose FedDMC, which achieves robust federated learning by efficiently and precisely detecting malicious clients. Specifically, FedDMC first applies principal component analysis to reduce the dimensionality of the model parameters, which retains the primary parameter features and reduces the computational overhead of subsequent clustering. Then, a binary tree-based clustering method with noise is designed to eliminate the effect of noisy points during clustering, facilitating accurate and efficient malicious-client detection. Finally, we design a self-ensemble detection correction module that uses exponential moving averages of historical results to improve the robustness of malicious-client detection. Extensive experiments on three benchmark datasets demonstrate that FedDMC outperforms state-of-the-art methods in terms of detection precision, global model accuracy, and computational complexity.
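
The abstract outlines a three-step detection pipeline: PCA-based dimensionality reduction of client updates, clustering to isolate suspicious clients, and an exponential-moving-average correction across rounds. The Python sketch below is only an illustrative approximation of that pipeline, not the authors' implementation: it substitutes scikit-learn's PCA and a plain two-way KMeans for the paper's binary tree-based clustering with noise, uses a simple EMA over per-client suspicion scores in place of the self-ensemble correction module, and its function name, parameters (n_components, alpha, threshold), and majority-is-benign assumption are all hypothetical choices for illustration.

# Illustrative sketch only -- NOT the FedDMC implementation from the paper.
# Assumptions: scikit-learn PCA + 2-way KMeans stand in for the paper's
# binary tree-based clustering with noise; a plain EMA over per-client
# suspicion scores stands in for the self-ensemble correction module;
# benign clients are assumed to form the majority cluster.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def detect_malicious(updates, ema_scores, n_components=10, alpha=0.7, threshold=0.5):
    """updates: (n_clients, n_params) flattened local model updates.
    ema_scores: per-client EMA of past suspicion scores (initialise to zeros).
    Returns (indices of suspected malicious clients, updated ema_scores)."""
    # Step 1: PCA keeps the dominant parameter directions and cuts clustering cost.
    k = min(n_components, updates.shape[0], updates.shape[1])
    reduced = PCA(n_components=k).fit_transform(updates)

    # Step 2: split clients into two groups and treat the smaller one as suspect.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    minority = np.argmin(np.bincount(labels, minlength=2))
    round_score = (labels == minority).astype(float)

    # Step 3: exponential moving average over rounds smooths one-off misdetections.
    ema_scores = alpha * round_score + (1.0 - alpha) * ema_scores
    suspected = np.where(ema_scores > threshold)[0]
    return suspected, ema_scores

# Toy usage: two clients submit clearly shifted updates (a crude poisoning stand-in).
rng = np.random.default_rng(0)
updates = np.vstack([rng.normal(0.0, 0.1, (8, 100)),   # benign clients
                     rng.normal(3.0, 0.1, (2, 100))])  # poisoned clients
ema = np.zeros(updates.shape[0])
suspected, ema = detect_malicious(updates, ema)
print("suspected malicious clients:", suspected)   # expect indices 8 and 9

In a federated round, the server would then aggregate only the updates from clients that are not flagged, which mirrors the robustness goal described in the abstract.
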
Pages: 5259-5274 (16 pages)
Related Papers (50 in total)
  • [1] FedPTA: Prior-Based Tensor Approximation for Detecting Malicious Clients in Federated Learning
    Mu, Xutong
    Cheng, Ke
    Liu, Teng
    Zhang, Tao
    Geng, Xueli
    Shen, Yulong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9100 - 9114
  • [2] FedGT: Identification of Malicious Clients in Federated Learning With Secure Aggregation
    Xhemrishi, Marvin
    Oestman, Johan
    Wachter-Zeh, Antonia
    Graell i Amat, Alexandre
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 2577 - 2592
  • [3] Robust Asymmetric Heterogeneous Federated Learning With Corrupted Clients
    Fang, Xiuwen
    Ye, Mang
    Du, Bo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (04) : 2693 - 2705
  • [4] Noise-Robust Federated Learning With Model Heterogeneous Clients
    Fang, Xiuwen
    Ye, Mang
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (05) : 4053 - 4071
  • [5] FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
    Zhang, Zaixi
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2545 - 2555
  • [6] Robust Federated Learning for Heterogeneous Clients and Unreliable Communications
    Wang, Ruyan
    Yang, Lan
    Tang, Tong
    Yang, Boran
    Wu, Dapeng
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (10) : 13440 - 13455
  • [7] Privacy-Preserving Federated Learning With Malicious Clients and Honest-but-Curious Servers
    Le, Junqing
    Zhang, Di
    Lei, Xinyu
    Jiao, Long
    Zeng, Kai
    Liao, Xiaofeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4329 - 4344
  • [8] Edge model: An efficient method to identify and reduce the effectiveness of malicious clients in federated learning
    Shahraki, Mahdi
    Bidgoly, Amir Jalaly
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 157 : 459 - 468
  • [9] FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks
    Chen, Jian
    Lin, Zehui
    Lin, Wanyu
    Shi, Wenlong
    Yin, Xiaoyan
    Wang, Di
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 1665 - 1678
  • [10] Using Third-Party Auditor to Help Federated Learning: An Efficient Byzantine-Robust Federated Learning
    Zhang, Zhuangzhuang
    Wu, Libing
    He, Debiao
    Li, Jianxin
    Lu, Na
    Wei, Xuejiang
IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2024, 9 (06) : 848 - 861