FedCAP: Robust Federated Learning via Customized Aggregation and Personalization

Cited by: 0
Authors
Li, Youpeng [1 ]
Wang, Xinda [1 ]
Yu, Fuxun [2 ]
Sun, Lichao [3 ]
Zhang, Wenbin [4 ]
Wang, Xuyu [4 ]
Affiliations
[1] Univ Texas Dallas, Richardson, TX USA
[2] Microsoft, Redmond, WA USA
[3] Lehigh Univ, Bethlehem, PA USA
[4] Florida Int Univ, Miami, FL 33199 USA
Source
2024 ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE (ACSAC) | 2024
Funding
U.S. National Science Foundation (NSF)
Keywords
federated learning; data heterogeneity; Byzantine-robustness;
DOI
10.1109/ACSAC63791.2024.00067
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Federated learning (FL), an emerging distributed machine learning paradigm, has been applied to various privacy-preserving scenarios. However, due to its distributed nature, FL faces two key issues: the non-independent and identically distributed (non-IID) nature of user data and vulnerability to Byzantine threats. To address these challenges, in this paper, we propose FedCAP, a robust FL framework against both data heterogeneity and Byzantine attacks. The core of FedCAP is a model-update calibration mechanism that helps the server capture differences in the direction and magnitude of model updates among clients. Furthermore, we design a customized model aggregation rule that facilitates collaborative training among similar clients while accelerating the model deterioration of malicious clients. With a Euclidean norm-based anomaly detection mechanism, the server can quickly identify and permanently remove malicious clients. Moreover, the impact of data heterogeneity and Byzantine attacks can be further mitigated through personalization on the client side. We conduct extensive experiments comparing FedCAP against multiple state-of-the-art baselines, demonstrating that it performs well in several non-IID settings and shows strong robustness under a series of poisoning attacks.
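
For intuition, the following is a minimal Python sketch of the general ideas sketched in the abstract: similarity-weighted customized aggregation on the server combined with a Euclidean norm-based anomaly check that permanently excludes flagged clients. This is not the paper's implementation; the function names, the cosine-similarity weighting, the median-based threshold, and the norm_threshold parameter are all illustrative assumptions.

import numpy as np

def detect_anomalies(updates, norm_threshold=3.0):
    # Hypothetical check: flag clients whose update norm deviates strongly
    # from the median norm across all received updates.
    norms = np.array([np.linalg.norm(u) for u in updates])
    median = np.median(norms)
    return {cid for cid, n in enumerate(norms) if n > norm_threshold * median}

def aggregate_for_client(target_id, updates, banned, temperature=1.0):
    # Build a customized aggregate for one client, weighting clients whose
    # update direction is similar to the target client's more heavily.
    target = updates[target_id]
    weights, contribs = [], []
    for cid, u in enumerate(updates):
        if cid in banned:
            continue  # permanently removed clients are excluded
        sim = float(np.dot(target, u) /
                    (np.linalg.norm(target) * np.linalg.norm(u) + 1e-12))
        weights.append(np.exp(sim / temperature))
        contribs.append(u)
    weights = np.array(weights) / np.sum(weights)
    return np.sum([w * u for w, u in zip(weights, contribs)], axis=0)

# Toy usage: four benign clients with similar updates, one client with an
# inflated (suspicious) update.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 10) for _ in range(4)] + [rng.normal(0, 1, 10) * 50]
banned = detect_anomalies(updates)
custom_update = aggregate_for_client(0, updates, banned)
print("banned clients:", banned)
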
Pages: 747-760
Number of pages: 14