Resource-Aware Split Federated Learning for Edge Intelligence

Cited by: 0
Authors
Arouj, Amna [1 ]
Abdelmoniem, Ahmed M. [1 ]
Alhilal, Ahmad [2 ]
You, Linlin [3 ]
Wang, Chen [4 ]
Affiliations
[1] Queen Mary Univ London, London, England
[2] Hong Kong Univ Sci & Tech, Hong Kong, Peoples R China
[3] Sun Yat Sen Univ, Guangzhou, Peoples R China
[4] Huazhong Univ Sci & Technol, Wuhan, Peoples R China
Source
PROCEEDINGS 2024 IEEE 3RD WORKSHOP ON MACHINE LEARNING ON EDGE IN SENSOR SYSTEMS, SENSYS-ML 2024 | 2024
Keywords
Federated Learning; Heterogeneity; Resource-Aware; Offloading;
DOI
10.1109/SenSys-ML62579.2024.00008
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This research investigates Federated Learning (FL) systems, wherein multiple edge devices cooperate to train a collective machine learning model using their locally distributed data. The focus is on addressing energy consumption challenges in battery-constrained devices and mitigating the negative impact of intensive on-device computations during the training phase. As FL sees widespread adoption, variations in clients' computational capabilities and battery levels lead to system stragglers and dropouts, causing a decline in training quality. To enhance FL's energy efficiency, we propose EAFL+, a pioneering cloud-edge-terminal collaborative approach. EAFL+ introduces a novel architectural design aimed at achieving power-aware FL training by capitalizing on resource diversity and computation offloading. It facilitates the efficient selection of an approximately-optimal offloading target from Cloud-tier, Edge-tier, and Terminal-tier resources, optimizing the cost-quality tradeoff for participating client devices in the FL process. The presented algorithm minimizes dropouts during training, enhancing participation rates and amplifying clients' contributions, resulting in better accuracy and convergence. Through experiments conducted on FL datasets and traces within a simulated FL environment, we find that EAFL+ eliminates client dropouts and improves accuracy by up to 24% compared to state-of-the-art methods.
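The abstract describes clients choosing an approximately-optimal offloading target across Terminal, Edge, and Cloud tiers to balance energy cost against training quality. The sketch below illustrates one plausible form such a selection could take: each client picks the battery-feasible tier with the lowest weighted energy/latency cost. All names, numbers, and the cost model are illustrative assumptions, not the authors' actual EAFL+ algorithm.

```python
# Hypothetical sketch of tiered offloading selection (not the authors'
# actual EAFL+ algorithm): pick the battery-feasible tier minimizing a
# weighted combination of energy cost and round latency.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    energy_cost: float   # Joules the client spends per training round (assumed)
    latency: float       # seconds per round, compute + transfer (assumed)

def select_tier(tiers, battery_budget, alpha=0.5):
    """Return the feasible tier with the lowest weighted energy/latency cost."""
    feasible = [t for t in tiers if t.energy_cost <= battery_budget]
    if not feasible:
        return None  # client would otherwise drop out of the round
    return min(feasible,
               key=lambda t: alpha * t.energy_cost + (1 - alpha) * t.latency)

# Illustrative numbers only: local training is energy-hungry but fast;
# offloading trades client energy for network/compute latency.
tiers = [
    Tier("terminal", energy_cost=5.0, latency=2.0),  # on-device training
    Tier("edge",     energy_cost=1.0, latency=3.0),  # offload to edge server
    Tier("cloud",    energy_cost=0.8, latency=6.0),  # offload to remote cloud
]
choice = select_tier(tiers, battery_budget=2.0)
```

With a tight battery budget, the terminal tier becomes infeasible and the client offloads instead of dropping out, which is the participation-rate effect the abstract attributes to EAFL+.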
Pages: 15-20
Page count: 6