Improving (α, f)-Byzantine resilience in federated learning via layerwise aggregation and cosine distance

Cited by: 0
Authors
Garcia-Marquez, M. [1 ]
Rodriguez-Barroso, N. [1 ]
Luzon, M. V. [2 ]
Herrera, F. [1 ]
Affiliations
[1] University of Granada, Andalusian Research Institute in Data Science and Computational Intelligence, Department of Computer Science and Artificial Intelligence, Granada 18071, Spain
[2] University of Granada, Andalusian Research Institute in Data Science and Computational Intelligence, Department of Software Engineering, Granada 18071, Spain
Keywords
Federated learning; Robust aggregation; Machine learning; Krum; Adversarial attack
DOI
10.1016/j.knosys.2025.114004
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The rapid development of artificial intelligence systems has amplified societal concerns regarding their usage, necessitating regulatory frameworks that encompass data privacy. Federated learning (FL) has been proposed as a potential solution to the data-privacy challenges of distributed machine learning, enabling collaborative model training without data sharing. However, FL systems remain vulnerable to Byzantine attacks, in which malicious nodes contribute corrupted model updates. Although Byzantine-resilient rules have emerged as widely adopted robust aggregation algorithms to mitigate these attacks, their effectiveness drops significantly in high-dimensional parameter spaces, sometimes leading to poorly performing models. This paper introduces Layerwise Cosine Aggregation, a novel aggregation scheme designed to enhance the robustness of these rules in such high-dimensional settings while preserving computational efficiency. A theoretical analysis demonstrates the superior robustness of the proposed Layerwise Cosine Aggregation compared to the original robust aggregation rules. Empirical evaluation on diverse image classification datasets, under varying data distributions and Byzantine attack scenarios, consistently demonstrates the improved performance of Layerwise Cosine Aggregation, achieving up to a 16% increase in model accuracy.
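
The abstract names the two ingredients of the proposed scheme, applying a robust rule layer by layer and measuring dissimilarity with cosine distance, but does not give the algorithm itself. The minimal NumPy sketch below illustrates how such a combination could look when the underlying rule is a Krum-style selection; all function names, the per-layer selection strategy, and the toy example are assumptions made for illustration, not the authors' implementation.

import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity; compares update direction rather than magnitude.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def krum_select(vectors, f, distance):
    # Krum criterion (Blanchard et al., 2017): pick the vector whose summed
    # distance to its n - f - 2 nearest neighbours is smallest.
    n = len(vectors)
    scores = []
    for i in range(n):
        dists = sorted(distance(vectors[i], vectors[j]) for j in range(n) if j != i)
        scores.append(sum(dists[: max(n - f - 2, 1)]))
    return int(np.argmin(scores))

def layerwise_cosine_krum(client_updates, f):
    # client_updates: list of per-client updates, each a list of layer arrays
    # with matching shapes. The Krum-style selection runs once per layer on the
    # flattened layer tensors, using cosine distance, so screening happens in
    # the lower-dimensional per-layer space instead of the full model space.
    num_layers = len(client_updates[0])
    aggregated = []
    for layer_idx in range(num_layers):
        flat_layers = [np.ravel(update[layer_idx]) for update in client_updates]
        chosen = krum_select(flat_layers, f, cosine_distance)
        aggregated.append(client_updates[chosen][layer_idx])
    return aggregated

# Toy usage: 5 simulated clients, at most f = 1 Byzantine, two layers.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(5)]
    clients[0] = [-layer for layer in clients[0]]  # one sign-flipped (Byzantine) update
    result = layerwise_cosine_krum(clients, f=1)
    print([layer.shape for layer in result])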
Pages: 16