FedPA: Generator-Based Heterogeneous Federated Prototype Adversarial Learning

Cited by: 0
Authors
Jiang, Lei [1 ]
Wang, Xiaoding [1 ]
Yang, Xu [2 ]
Shu, Jiwu [2 ,3 ]
Lin, Hui [1 ]
Yi, Xun [4 ]
Affiliations
[1] Fujian Normal Univ, Coll Comp & Cyber Secur, Fuzhou 350117, Peoples R China
[2] Minjiang Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
[3] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[4] RMIT Univ, Sch Comp Technol, Melbourne, Vic 3000, Australia
Keywords
Feature extraction; Prototypes; Generators; Data models; Servers; Federated learning; Training; Feature mining; federated learning; model regularization; privacy protection; prototype learning;
DOI
10.1109/TDSC.2024.3419211
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Subject Classification Code
0812
Abstract
Federated learning is an emerging distributed learning paradigm designed to collaboratively train a global model without accessing clients' private data. However, data heterogeneity among clients leads to significant degradation in model performance. Some studies suggest that model regularization and generator-based enrichment of datasets with diverse features can effectively enhance model performance. However, current research regularizes only specific modules of the model rather than the entire model, offering limited mitigation of the bias introduced by heterogeneous data. Moreover, few methods account for the fact that generators often produce samples with only simple features, and using them to generate raw data directly can raise privacy concerns. To address these challenges, we propose a generator-based heterogeneous Federated Prototype Adversarial learning framework, named FedPA, which combines prototype learning with lightweight generators to regularize the entire model. Our generators produce features rather than raw data and use prototype learning to mine hard features in an adversarial manner, thereby improving model performance. Experimental results show that FedPA improves test accuracy by 3.7% compared to state-of-the-art methods, validating that FedPA effectively mitigates model bias.
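The abstract describes the core mechanism only at a high level: per-class prototypes computed from client features, plus a lightweight generator that synthesizes feature vectors (not raw data) and is steered toward hard features adversarially. The following is a minimal sketch of the prototype and feature-generator pieces under stated assumptions; the module names, dimensions, and the simple prototype-alignment loss are illustrative, not the authors' FedPA implementation.

```python
# Hedged sketch of the ideas named in the abstract: class prototypes from
# client features, plus a lightweight generator producing feature vectors
# rather than raw data. All shapes, names, and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureGenerator(nn.Module):
    """Lightweight generator: maps (noise, class label) to a feature vector."""

    def __init__(self, num_classes: int, noise_dim: int = 32, feat_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim * 2, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        z = torch.randn(labels.size(0), self.embed.embedding_dim, device=labels.device)
        return self.net(torch.cat([z, self.embed(labels)], dim=1))


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean feature per class (zero vector for classes absent from the batch)."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos


def prototype_alignment_loss(gen_feats: torch.Tensor, labels: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Pull generated features toward the prototype of their target class.
    An adversarial variant (as the abstract suggests) would instead train the
    generator to produce hard features that challenge the classifier."""
    return F.mse_loss(gen_feats, protos[labels])


if __name__ == "__main__":
    num_classes, feat_dim = 10, 128
    gen = FeatureGenerator(num_classes, feat_dim=feat_dim)
    # Stand-in for a local feature extractor's output on one client.
    feats = torch.randn(64, feat_dim)
    labels = torch.randint(0, num_classes, (64,))
    protos = class_prototypes(feats, labels, num_classes)
    loss = prototype_alignment_loss(gen(labels), labels, protos)
    loss.backward()
    print(f"prototype alignment loss: {loss.item():.4f}")
```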
Pages: 939-949
Page count: 11