Robust Federated Learning with Noisy and Heterogeneous Clients

Cited by: 92
Authors
Fang, Xiuwen [1]
Ye, Mang [1,2]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Inst Artificial Intelligence, Hubei Key Lab Multim, Wuhan, Peoples R China
[2] Hubei Luojia Lab, Wuhan, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52688.2022.00983
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Model-heterogeneous federated learning is a challenging task, since each client independently designs its own model. Due to annotation difficulty and the free-riding participant issue, local client data usually contain unavoidable and varying label noise, which existing algorithms cannot effectively address. This paper presents the first attempt to study a new and challenging robust federated learning problem with noisy and heterogeneous clients. We propose a novel solution, RHFL (Robust Heterogeneous Federated Learning), which simultaneously handles label noise and performs federated learning in a single framework. It features three aspects: (1) For communication between heterogeneous models, we directly align model feedback on public data, which requires no additional shared global model for collaboration. (2) For internal label noise, we apply a robust, noise-tolerant loss function to reduce its negative effects. (3) For challenging noisy feedback from other participants, we design a novel client confidence re-weighting scheme that adaptively assigns a corresponding weight to each client during the collaborative learning stage. Extensive experiments validate the effectiveness of our approach in reducing the negative effects of different noise rates and types under both model-homogeneous and model-heterogeneous federated learning settings, consistently outperforming existing methods.
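The abstract names three mechanisms but gives no formulas. Below is a minimal illustrative sketch of how such a pipeline could look, not the paper's actual implementation: it assumes symmetric cross-entropy as one common noise-tolerant loss, KL-divergence alignment of client outputs on shared public data, and a hypothetical softmax-based confidence weighting. All function names, hyperparameters, and the weighting rule are assumptions for illustration only.

```python
# Illustrative sketch only. The abstract does not specify the exact loss or
# re-weighting formulas; symmetric cross-entropy and softmax confidence
# weighting are assumed here as plausible stand-ins.
import torch
import torch.nn.functional as F


def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, num_classes=10):
    """Example noise-tolerant loss: standard CE plus reverse CE (assumed weights)."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    onehot = F.one_hot(targets, num_classes).float().clamp(min=1e-4)
    rce = (-pred * onehot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce


def client_confidence_weights(local_losses):
    """Hypothetical re-weighting: lower estimated local loss -> higher confidence."""
    return torch.softmax(-torch.tensor(local_losses), dim=0)


def align_feedback(local_logits, peer_logits_list, peer_weights):
    """Align one client's outputs on public data with re-weighted peer feedback."""
    log_p = F.log_softmax(local_logits, dim=1)
    loss = 0.0
    for w, peer_logits in zip(peer_weights, peer_logits_list):
        q = F.softmax(peer_logits.detach(), dim=1)  # peers are not updated here
        loss = loss + w * F.kl_div(log_p, q, reduction="batchmean")
    return loss


if __name__ == "__main__":
    torch.manual_seed(0)
    # Two hypothetical peers scoring the same public batch (8 samples, 10 classes).
    local = torch.randn(8, 10, requires_grad=True)
    peers = [torch.randn(8, 10), torch.randn(8, 10)]
    weights = client_confidence_weights([0.4, 1.2])  # lower loss -> higher weight
    labels = torch.randint(0, 10, (8,))
    total = symmetric_cross_entropy(local, labels) + align_feedback(local, peers, weights)
    total.backward()
    print(float(total))
```

Because the alignment operates on output distributions over public data, heterogeneous client architectures never need to exchange parameters, which is consistent with the abstract's claim that no shared global model is required.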
Pages: 10062-10071
Page count: 10