FedGT: Identification of Malicious Clients in Federated Learning With Secure Aggregation

Times Cited: 0
Authors
Xhemrishi, Marvin [1 ]
Oestman, Johan [2 ]
Wachter-Zeh, Antonia [1 ]
Graell i Amat, Alexandre [3 ]
Affiliations
[1] Tech Univ Munich, TUM Sch Computat Informat & Technol, D-80333 Munich, Germany
[2] AI Sweden, S-41746 Gothenburg, Sweden
[3] Chalmers Univ Technol, Dept Elect Engn, S-41296 Gothenburg, Sweden
Funding
Swedish Research Council;
Keywords
Vectors; Testing; Privacy; Security; Servers; Data models; Protocols; Training; Decoding; Aggregates; AI security; federated learning; group testing; malicious clients; poisoning attacks; privacy; secure aggregation; security; CODES;
DOI
10.1109/TIFS.2025.3539964
CLC Classification Number
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
Federated learning (FL) has emerged as a promising approach for collaboratively training machine learning models while preserving data privacy. Due to its decentralized nature, FL is vulnerable to poisoning attacks, where malicious clients compromise the global model through altered data or updates. Identifying such malicious clients is crucial for ensuring the integrity of FL systems. This task becomes particularly challenging under privacy-enhancing protocols such as secure aggregation, creating a fundamental trade-off between privacy and security. In this work, we propose FedGT, a novel framework designed to identify malicious clients in FL with secure aggregation while preserving privacy. Drawing inspiration from group testing, FedGT leverages overlapping groups of clients to identify the presence of malicious clients via a decoding operation. The clients identified as malicious are then removed from the model training, which is performed over the remaining clients. By choosing the size, number, and overlap between groups, FedGT strikes a balance between privacy and security. Specifically, the server learns the aggregated model of the clients in each group; vanilla federated learning and secure aggregation correspond to the extreme cases of FedGT with group size equal to one and the total number of clients, respectively. The effectiveness of FedGT is demonstrated through extensive experiments on three datasets in a cross-silo setting under different data-poisoning attacks. These experiments showcase FedGT's ability to identify malicious clients, resulting in high model utility. We further show that FedGT significantly outperforms the private robust aggregation approach based on the geometric median recently proposed by Pillutla et al. and the robust aggregation technique Multi-Krum in multiple settings.
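The group-testing idea described in the abstract can be sketched with a noiseless toy model. The assignment matrix, the OR-style test outcomes, and the COMP-style decoder below are illustrative assumptions for exposition, not the paper's actual FedGT decoding operation: each row of the binary matrix is a group, a group "tests positive" if its aggregate contains at least one malicious client, and a naive decoder clears any client that appears in some negative group.

```python
import numpy as np

def comp_decode(assignment, outcomes):
    """Naive COMP-style decoder: a client is cleared if it belongs to
    at least one group whose test outcome is negative; every remaining
    client is flagged as potentially malicious."""
    negative_groups = assignment[~outcomes]          # rows that tested clean
    cleared = negative_groups.sum(axis=0) > 0        # seen in a clean group?
    return ~cleared                                  # True = flagged

# Toy example: 5 clients assigned to 3 overlapping groups (rows = groups).
A = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
], dtype=bool)

truly_malicious = np.array([False, False, True, False, False])

# A group's aggregate "tests positive" iff it contains a malicious client
# (a logical OR over the group's members).
outcomes = (A.astype(int) @ truly_malicious.astype(int)) > 0

flagged = comp_decode(A, outcomes)
# flagged -> [False, False, True, True, False]
```

Note that client 3 is a false alarm: it only ever appears in a group alongside the truly malicious client 2, so no negative test can clear it. This is exactly the trade-off the abstract describes, where the size, number, and overlap of the groups govern how finely malicious clients can be resolved.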
Pages: 2577-2592
Page Count: 16
Related Papers
54 records total
[31]   Focal Loss for Dense Object Detection [J].
Lin, Tsung-Yi ;
Goyal, Priya ;
Girshick, Ross ;
He, Kaiming ;
Dollar, Piotr .
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, :2999-3007
[32]   Optimum Detection of Defective Elements in Non-Adaptive Group Testing [J].
Liva, Gianluigi ;
Paolini, Enrico ;
Chiani, Marco .
2021 55TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2021
[33]  
LLOYD SP, 1982, IEEE T INFORM THEORY, V28, P129, DOI 10.1109/TIT.1982.1056489
[34]  
MacWilliams F. J., 1977, The theory of error-correcting codes. II
[35]   Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation [J].
Mallah, Ranwa Al ;
Lopez, David ;
Badu-Marfo, Godwin ;
Farooq, Bilal .
IEEE ACCESS, 2023, 11 :125064-125079
[36]  
McMahan HB, 2017, PR MACH LEARN RES, V54, P1273
[37]  
Pan XD, 2020, PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, P1641
[38]  
Park Jongjin, 2021, PROC ADV NEURAL INF, V34
[39]   Robust Aggregation for Federated Learning [J].
Pillutla, Krishna ;
Kakade, Sham M. ;
Harchaoui, Zaid .
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70 :1142-1154
[40]   SILHOUETTES - A GRAPHICAL AID TO THE INTERPRETATION AND VALIDATION OF CLUSTER-ANALYSIS [J].
ROUSSEEUW, PJ .
JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 1987, 20 :53-65