CGKDFL: A Federated Learning Approach Based on Client Clustering and Generator-Based Knowledge Distillation for Heterogeneous Data

Cited by: 0
Authors
Zhang, Sanfeng [1 ]
Xu, Hongzhen [2 ]
Yu, Xiaojun [2 ]
Affiliations
[1] East China Univ Technol, Sch Informat Engn, Nanchang, Peoples R China
[2] East China Univ Technol, Sch Software, Nanchang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
clustering; federated learning; generator; heterogeneous data; knowledge distillation;
DOI
10.1002/cpe.70048
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
In real-world complex networks, data are frequently decentralized and non-independently and identically distributed (Non-IID). Such heterogeneous data poses significant challenges for federated learning, including biased global models, insufficient personalization of local models, and difficulty in absorbing global knowledge. We propose CGKDFL, a Federated Learning Approach Based on Client Clustering and Generator-Based Knowledge Distillation for heterogeneous data. First, to reduce global model bias, we propose a clustering federated learning approach in which each client transmits only some of the parameters of a selected layer, thereby reducing the number of transmitted parameters. Next, to compensate for the loss of global knowledge caused by clustering, a generator designed to improve privacy and increase diversity is developed on the server side. Using the label information provided by each client, and without any external dataset, the generator produces feature representation data aligned with that client's task and transfers its global knowledge to the local model, which the client then uses for knowledge distillation. Finally, extensive experiments were conducted on three heterogeneous datasets. The results show that CGKDFL outperforms the baseline methods by at least 7.24%, 6.73%, and 3.13% in accuracy on the three datasets, and it converges faster than all compared methods in every case.
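The generator-based distillation step described in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch. This is not the authors' implementation: the names (Generator, LocalModel, local_update), the network sizes, the label-sampling scheme, and the distillation weight alpha are all illustrative assumptions. The idea sketched here is that a server-provided generator, conditioned on labels supplied by the client, produces feature representations carrying global knowledge, and the client's classifier head is trained on both its local data and these generated features.

# Hypothetical sketch (not the paper's released code): client-side local update
# mixing supervised training with distillation from a server-provided
# label-conditioned generator, in the spirit of the CGKDFL description.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, FEAT_DIM, NUM_CLASSES = 32, 64, 10  # assumed sizes

class Generator(nn.Module):
    """Server-side generator: maps (noise, label) to a feature representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, FEAT_DIM))
    def forward(self, z, y):
        y_onehot = F.one_hot(y, NUM_CLASSES).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

class LocalModel(nn.Module):
    """Client model split into a feature extractor and a classifier head."""
    def __init__(self):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(784, FEAT_DIM), nn.ReLU())
        self.head = nn.Linear(FEAT_DIM, NUM_CLASSES)
    def forward(self, x):
        return self.head(self.extractor(x))

def local_update(model, generator, loader, label_counts, epochs=1, alpha=1.0):
    """One local round: cross-entropy on local data plus a distillation term on
    generated features; label_counts is a 1-D float tensor of this client's
    per-class sample counts, used to sample task-relevant labels."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    label_probs = label_counts / label_counts.sum()
    generator.eval()
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            # Distillation: the head should classify generated global features
            # for labels drawn from this client's own label distribution.
            y_gen = torch.multinomial(label_probs, x.size(0), replacement=True)
            z = torch.randn(x.size(0), LATENT_DIM)
            with torch.no_grad():
                feat = generator(z, y_gen)
            loss = loss + alpha * F.cross_entropy(model.head(feat), y_gen)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

In CGKDFL as described, such a local step would run within each client cluster, with only part of a selected layer's parameters returned to the server; that clustering and communication logic is omitted from this sketch.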
Pages: 14