ICB FL: Implicit Class Balancing Towards Fairness in Federated Learning

Cited by: 1
|
Authors
Li, Yanli [1 ]
Zhong, Laicheng [1 ]
Yuan, Dong [1 ]
Chen, Huaming [1 ]
Bao, Wei [1 ]
Affiliations
[1] Univ Sydney, Sydney, NSW, Australia
Keywords
Federated Learning; Fairness; Clustering;
DOI
10.1145/3579375.3579392
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202 ;
Abstract
Federated learning (FL) is a promising machine learning paradigm that allows many clients to jointly train a model without sharing their raw data. Because standard FL is designed from the server's perspective, unfairness can arise throughout the learning process, including the global model optimization phase. Some existing works address this issue by guaranteeing that the global model achieves similar accuracy across different classes (i.e., labels), but they fail to consider the implicit classes (different representations of one label) beneath those labels, where the fairness issue persists. In this paper, we focus on the fairness issue in the global model optimization phase and bridge this research gap by introducing the Implicit Class Balancing (ICB) Federated Learning framework with a Single Class Training Scheme (SCTS). In ICB FL, the server first broadcasts the current global model and assigns a particular class (label) to each client. Each client then locally trains the model using only the data of its assigned class (SCTS) and sends the resulting gradient back to the server. The server subsequently performs unsupervised learning on the gradients to identify the implicit classes and generates a balancing weight for each client. Finally, the server averages the received gradients using these weights and updates the global model. We evaluate ICB FL on three datasets, and the experimental results show that it effectively enhances fairness across both explicit and implicit classes.
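The aggregation round described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the tiny k-means routine stands in for the unspecified unsupervised-learning step, the inverse-cluster-size rule is an assumed instantiation of the "balanced weight", and the function names (`kmeans`, `icb_aggregate`) are illustrative.

```python
# Hedged sketch of one ICB FL server-side aggregation round.
# Assumptions (not from the paper): k-means as the clustering step,
# and per-client weight inversely proportional to its cluster size.
import numpy as np

def kmeans(grads, k, iters=20, seed=0):
    """Tiny k-means over client gradient vectors (stand-in for the
    paper's unspecified unsupervised-learning step)."""
    rng = np.random.default_rng(seed)
    centers = grads[rng.choice(len(grads), k, replace=False)]
    for _ in range(iters):
        # Assign each gradient to its nearest center.
        dists = ((grads[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned gradients.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = grads[labels == j].mean(axis=0)
    return labels

def icb_aggregate(grads, n_implicit):
    """Cluster client gradients into implicit classes, then average
    them with weights that balance the clusters (assumed: weight
    inversely proportional to cluster size, normalized to sum to 1)."""
    grads = np.asarray(grads, dtype=float)
    labels = kmeans(grads, n_implicit)
    sizes = np.bincount(labels, minlength=n_implicit)
    w = 1.0 / sizes[labels]          # small clusters get larger weight
    w = w / w.sum()
    return (w[:, None] * grads).sum(axis=0), labels

# Usage: three clients whose gradients sit near (0, 0) and one near
# (10, 10); balancing gives the minority implicit class an aggregate
# influence equal to the majority's.
g = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.0], [10.0, 10.0]]
agg, labels = icb_aggregate(g, n_implicit=2)
```

With plain averaging the minority cluster would contribute only a quarter of the update; under the balanced weights each implicit class contributes half, which is the behavior the abstract attributes to ICB FL.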
Pages: 135-142
Page count: 8
Related Papers
50 records in total
  • [1] Towards Fairness-Aware Federated Learning
    Shi, Yuxin
    Yu, Han
    Leung, Cyril
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 11922 - 11938
  • [2] DDPG-FL: A Reinforcement Learning Approach for Data Balancing in Federated Learning
    Ouyang, Bei
    Li, Jingyi
    Chen, Xu
    FRONTIERS OF NETWORKING TECHNOLOGIES, CCF CHINANET 2023, 2024, 1988 : 33 - 47
  • [3] Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness
    Lin, Shiyun
    Han, Yuze
    Li, Xiang
    Zhang, Zhihua
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [4] Multi-objective federated learning: Balancing global performance and individual fairness
    Shen, Yuhao
    Xi, Wei
    Cai, Yunyun
    Fan, Yuwei
    Yang, He
    Zhao, Jizhong
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2025, 162
  • [5] FL-OTCSEnc: Towards secure federated learning with deep compressed sensing
    Wu, Leming
    Jin, Yaochu
    Yan, Yuping
    Hao, Kuangrong
    KNOWLEDGE-BASED SYSTEMS, 2024, 291
  • [6] FL-APB: Balancing Privacy Protection and Performance Optimization for Adversarial Training in Federated Learning
    Liu, Teng
    Wu, Hao
    Sun, Xidong
    Niu, Chaojie
    Yin, Hao
    ELECTRONICS, 2024, 13 (21)
  • [7] Fairness in Trustworthy Federated Learning: A Survey
    Chen H.-Y.
    Li Y.-D.
    Zhang H.-L.
    Chen N.-Y.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2023, 51 (10): : 2985 - 3010
  • [8] Fairness and accuracy in horizontal federated learning
    Huang, Wei
    Li, Tianrui
    Wang, Dexian
    Du, Shengdong
    Zhang, Junbo
    Huang, Tianqiang
    INFORMATION SCIENCES, 2022, 589 (170-185) : 170 - 185
  • [9] Federated Learning with Class Balanced Loss Optimized by Implicit Stochastic Gradient Descent
    Zhou, Jincheng
    Zheng, Maoxing
    SOFT COMPUTING IN DATA SCIENCE, SCDS 2023, 2023, 1771 : 121 - 135
  • [10] SARS: A Personalized Federated Learning Framework Towards Fairness and Robustness against Backdoor Attacks
    Zhang, Webin
    Li, Youpeng
    An, Lingling
    Wan, Bo
    Wang, Xuyu
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2024, 8 (04):