Training Heterogeneous Client Models using Knowledge Distillation in Serverless Federated Learning

Cited by: 2
Authors
Chadha, Mohak [1 ]
Khera, Pulkit [1 ]
Gu, Jianfeng [1 ]
Abboud, Osama [2 ]
Gerndt, Michael [1 ]
Affiliations
[1] Tech Univ Munich, Munich, Germany
[2] Huawei Technol, Dusseldorf, Germany
Source
39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024 | 2024
Keywords
Federated Learning; Serverless Computing; FaaS; Deep Learning; Scalability of learning algorithms; Knowledge Distillation
DOI
10.1145/3605098.3636015
CLC Classification
TP39 [Computer Applications]
Subject Classification
081203; 0835
Abstract
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients while keeping the data decentralized. Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies, particularly Function-as-a-Service (FaaS), for FL can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders. However, existing serverless FL systems implicitly assume a uniform global model architecture across all participating clients during training. This assumption fails to address fundamental challenges in practical FL due to the resource and statistical data heterogeneity among FL clients. To address these challenges and enable heterogeneous client models in serverless FL, we utilize Knowledge Distillation (KD) in this paper. Towards this, we propose novel optimized serverless workflows for two popular conventional federated KD techniques, i.e., FedMD and FedDF. We implement these workflows by introducing several extensions to an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate the two strategies on multiple datasets across varying levels of client data heterogeneity using heterogeneous client models with respect to accuracy, fine-grained training times, and costs. Results from our experiments demonstrate that serverless FedDF is more robust to extreme non-IID data distributions, is faster, and leads to lower costs than serverless FedMD. In addition, compared to the original implementation, our optimizations for particular steps in FedMD and FedDF lead to an average speedup of 3.5x and 1.76x, respectively, across all datasets.
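To make the two aggregation strategies concrete, below is a minimal sketch of the logit-averaging and distillation step that both FedMD and FedDF build on: clients score a shared public/transfer dataset, the server averages their logits into a consensus (FedMD) or uses the ensemble average as the teacher signal for distilling a model (FedDF). This is an illustrative sketch only, not the paper's FedLess implementation; the names `consensus_logits` and `distillation_loss` are hypothetical.

```python
# Illustrative sketch of the consensus/ensemble-distillation step shared by
# FedMD and FedDF (assumed structure; not the FedLess implementation).
import numpy as np

def consensus_logits(client_logits: list) -> np.ndarray:
    """Average per-client logits on the public dataset.
    Each array has shape (n_public_samples, n_classes)."""
    return np.mean(np.stack(client_logits, axis=0), axis=0)

def softmax(z: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = z / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      temperature: float = 2.0) -> float:
    """KL divergence between softened teacher and student distributions,
    averaged over the public dataset (the standard KD objective)."""
    p = softmax(teacher_logits, temperature)  # teacher (consensus) targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1)
    return float(kl.mean()) * temperature ** 2

# Example: three heterogeneous clients score the same public batch; the
# server forms the consensus and measures one client's distillation loss.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(8, 10)) for _ in range(3)]
teacher = consensus_logits(logits)
print(distillation_loss(logits[0], teacher))
```

Because only logits on the public data are exchanged, each client can keep its own model architecture, which is what enables heterogeneous client models in both techniques.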
Pages: 997-1006
Page count: 10