Robust Heterogeneous Federated Learning under Data Corruption

Cited by: 17
Authors
Fang, Xiuwen [1 ]
Ye, Mang [1 ]
Yang, Xiyuan [1 ]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Hubei Key Lab Multimedia & Network Commun Engn, Inst Artificial Intelligence, Sch Comp Sci, Hubei L, Wuhan, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV | 2023
Funding
National Natural Science Foundation of China
DOI
10.1109/ICCV51070.2023.00463
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Model heterogeneous federated learning is a realistic and challenging problem. However, due to limitations in data collection, storage, and transmission, as well as the presence of free-rider participants, clients may suffer from data corruption. This paper makes the first attempt to investigate the problem of data corruption in the model heterogeneous federated learning framework. We design a novel method named Augmented Heterogeneous Federated Learning (AugHFL), which consists of two stages: 1) in the local update stage, a corruption-robust data augmentation strategy is adopted to minimize the adverse effects of local corruption while enabling the models to learn rich local knowledge; 2) in the collaborative update stage, we design a robust re-weighted communication approach, which implements communication between heterogeneous models while mitigating corrupted knowledge transfer from others. Extensive experiments demonstrate the effectiveness of our method in coping with various corruption patterns in the model heterogeneous federated learning setting.
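The abstract's second stage describes a re-weighted communication scheme that lets heterogeneous client models exchange knowledge while down-weighting corrupted contributions. The record gives no implementation details, so the following is a minimal, hypothetical sketch, not the authors' actual AugHFL algorithm: it aggregates client predictions on shared public data and assigns lower weight to clients whose predictions diverge most from the unweighted consensus, a common proxy for corruption. The function name `reweighted_consensus` and the divergence-based weighting rule are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reweighted_consensus(client_logits):
    """Aggregate heterogeneous clients' predictions on shared public data.

    client_logits: list of (n_samples, n_classes) arrays, one per client
    (model architectures may differ; only output dimensions must match).
    Clients whose predictions diverge from the plain average are
    down-weighted, so a corrupted client contributes less to the consensus.
    Returns (weighted consensus distribution, per-client weights).
    """
    probs = np.stack([softmax(l) for l in client_logits])  # (k, n, c)
    mean = probs.mean(axis=0)                              # (n, c)
    eps = 1e-12
    # Mean KL divergence of each client's predictions from the consensus.
    div = np.array([
        (p * (np.log(p + eps) - np.log(mean + eps))).sum(axis=1).mean()
        for p in probs
    ])
    # Lower divergence -> higher weight; weights sum to 1.
    w = softmax(-div)
    consensus = np.tensordot(w, probs, axes=1)             # (n, c)
    return consensus, w
```

In a full pipeline, each client would then distill from `consensus` on the public data, which is how heterogeneous architectures can communicate without sharing parameters.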
Pages: 4997-5007
Page count: 11