Cooperative Multi-Model Training for Personalized Federated Learning Over Heterogeneous Devices

Citations: 0
Authors
Xu, Jian [1 ]
Wan, Shuo [2 ]
Li, Yinchuan [2 ]
Luo, Sichun [3 ]
Chen, Zhilin [2 ]
Shao, Yunfeng [2 ]
Chen, Zhitang [2 ]
Huang, Shao-Lun [1 ]
Song, Linqi [3 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Beijing 100190, Peoples R China
[2] Huawei Technol, Noahs Ark Lab, Beijing 100085, Peoples R China
[3] City Univ Hong Kong, Shenzhen Res Inst, Hong Kong, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Training; Performance evaluation; Data privacy; Computer vision; Federated learning; Computational modeling; Computer architecture; Benchmark testing; Feature extraction; Data models; heterogeneous devices; multiple models; label shift; personalization;
DOI
10.1109/JSTSP.2024.3497660
Chinese Library Classification
TM [Electrical technology]; TN [Electronics and communication technology];
Discipline classification codes
0808 ; 0809 ;
Abstract
Federated learning (FL) is an increasingly popular paradigm for protecting data privacy in machine learning systems. However, data heterogeneity and high computation cost/latency remain major barriers to deploying FL in real-world applications with heterogeneous devices. In this paper, we propose a novel personalized FL framework named CompFL that allows cooperative training of models with varied structures to mitigate these issues. First, CompFL initializes a set of expert models of varied sizes and allows each client to choose one or more expert models for training according to its capacity. Second, CompFL combines a model-decoupling strategy with local-global feature alignment to mitigate the adverse impact of label heterogeneity: clients share only the feature-extractor part of each model architecture. Third, to encourage mutual enhancement among the various models, knowledge distillation is applied during local training to improve overall performance. To make the framework workable in real systems, we implement it both in centralized settings with server-coordinated parallel training and in decentralized settings with newly developed device-to-device training-forwarding schemes. Extensive experiments on benchmark datasets verify the potential of our framework for personalized FL over heterogeneous devices.
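The abstract's core ideas (capacity-based selection among expert models of different sizes, decoupling each model into a shared feature extractor and a personalized head, and per-architecture aggregation of extractors only) can be illustrated with a minimal sketch. This is not the paper's implementation: the expert sizes, the capacity threshold, and the helper names (`init_model`, `client_select`, `aggregate_extractors`) are all illustrative assumptions, and simple NumPy matrices stand in for real networks.

```python
import numpy as np

# Hypothetical expert-model sizes (feature dimensions); names and values
# are illustrative, not from the paper.
EXPERT_DIMS = {"small": 8, "large": 16}

def init_model(dim, rng):
    # Each model is decoupled into a feature extractor (shared with the
    # server) and a classifier head (kept local for personalization).
    return {"extractor": rng.standard_normal((dim, dim)),
            "head": rng.standard_normal((dim, 3))}

def client_select(capacity):
    # Capacity-aware selection: high-capacity clients train both experts,
    # low-capacity clients only the small one (a simplifying assumption).
    return ["small", "large"] if capacity >= 1.0 else ["small"]

def aggregate_extractors(client_models, size):
    # Per-architecture averaging of extractor weights only; only clients
    # holding this expert size contribute to its aggregate.
    mats = [m[size]["extractor"] for m in client_models if size in m]
    return sum(mats) / len(mats)

rng = np.random.default_rng(0)
capacities = [0.5, 1.0, 2.0]
clients = [{s: init_model(EXPERT_DIMS[s], rng) for s in client_select(c)}
           for c in capacities]

# One communication round: the server averages extractors per expert size
# and broadcasts them back; each client keeps its own personalized head.
for size in EXPERT_DIMS:
    avg = aggregate_extractors(clients, size)
    for m in clients:
        if size in m:
            m[size]["extractor"] = avg
```

After one round, clients holding the same expert size share an identical extractor while their heads remain distinct, which is the decoupling effect the abstract describes; the distillation and feature-alignment losses used during local training are omitted here.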
Pages: 195 - 207
Page count: 13
Related papers
50 records in total
  • [41] An On-Device Federated Learning Approach for Cooperative Model Update Between Edge Devices
    Ito, Rei
    Tsukada, Mineto
    Matsutani, Hiroki
    IEEE ACCESS, 2021, 9 : 92986 - 92998
  • [42] Personalized Federated Learning with Multi-branch Architecture
    Mori, Junki
    Yoshiyama, Tomoyuki
    Furukawa, Ryo
    Teranishi, Isamu
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [43] An efficient personalized federated learning approach in heterogeneous environments: a reinforcement learning perspective
    Yang, Hongwei
    Li, Juncheng
    Hao, Meng
    Zhang, Weizhe
    He, Hui
    Sangaiah, Arun Kumar
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [44] FedVQA: Personalized Federated Visual Question Answering over Heterogeneous Scenes
    Lao, Mingrui
    Pu, Nan
    Zhong, Zhun
    Sebe, Nicu
    Lew, Michael S.
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 7796 - 7807
  • [45] FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks
    Jang, Jaehee
    Ha, Heonseok
    Jung, Dahuin
    Yoon, Sungroh
    51ST INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2022, 2022,
  • [46] To be global or personalized: Generalized federated learning with cooperative adaptation for data heterogeneity
    Ding, Kaijian
    Feng, Xiang
    Yu, Huiqun
    KNOWLEDGE-BASED SYSTEMS, 2024, 301
  • [47] ESFL: Efficient Split Federated Learning Over Resource-Constrained Heterogeneous Wireless Devices
    Zhu, Guangyu
    Deng, Yiqin
    Chen, Xianhao
    Zhang, Haixia
    Fang, Yuguang
    Wong, Tan F.
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (16): 27153 - 27166
  • [48] Hydra: Hybrid-model federated learning for human activity recognition on heterogeneous devices
    Wang, Pu
    Ouyang, Tao
    Wu, Qiong
    Gong, Jie
    Chen, Xu
    JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 147
  • [49] FLIGHT: Federated Learning with IRS for Grouped Heterogeneous Training
    Yin, T.
    Li, L.
    Ma, D.
    Lin, W.
    Liang, J.
    Han, Z.
    Journal of Communications and Information Networks, 2022, 7 (02): 135 - 146
  • [50] Enhancing the Generalization of Personalized Federated Learning with Multi-head Model and Ensemble Voting
    Le, Van An
    Tran, Nam Duong
    Nguyen, Phuong Nam
    Nguyen, Thanh Hung
    Nguyen, Phi Le
    Nguyen, Truong Thao
    Ji, Yusheng
    PROCEEDINGS 2024 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM, IPDPS 2024, 2024, : 205 - 216