A Hybrid Architecture for Federated and Centralized Learning

Cited by: 32
Authors
Elbir, Ahmet M. [1 ,2 ]
Coleri, Sinem [3 ]
Papazafeiropoulos, Anastasios K. [2 ,4 ]
Kourtessis, Pandelis [4 ]
Chatzinotas, Symeon [2 ]
Affiliations
[1] Duzce Univ, Dept Elect & Elect Engn, TR-81620 Duzce, Turkey
[2] Univ Luxembourg, SnT, L-1855 Luxembourg, Luxembourg
[3] Koc Univ, Dept Elect & Elect Engn, TR-34450 Istanbul, Turkey
[4] Univ Hertfordshire, CIS Res Grp, Hatfield AL10 9AB, Herts, England
Keywords
Machine learning; federated learning; centralized learning; edge intelligence; edge efficiency; resource allocation; intelligence; design
DOI
10.1109/TCCN.2022.3181032
Chinese Library Classification
TN [Electronic Technology, Communication Technology];
Subject Classification Code
0809;
Abstract
Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS) and thus entails a huge communication overhead. To overcome this, federated learning (FL) has been suggested as a promising tool, wherein the clients send only the model updates to the PS instead of the whole dataset. However, FL demands powerful computational resources from the clients, and in practice not all clients have sufficient resources to participate in training. To address this common scenario, we propose a more efficient approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining clients send their datasets to the PS, which computes the model updates on their behalf. The model parameters are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two techniques: i) increased computation-per-client and ii) sequential data transmission. Notably, the HFCL frameworks outperform FL by up to 20% in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all the clients collaborate on the learning process with their datasets.
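To make the aggregation step concrete, the following is a minimal Python sketch of one HFCL training round, assuming a linear least-squares model, plain gradient descent, and FedAvg-style equal-weight averaging; the function names (hfcl_round, local_gradient), the loss, and the learning rate are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def local_gradient(w, X, y):
    # Mean-squared-error gradient for a linear model y ~ X @ w (illustrative loss).
    return 2.0 * X.T @ (X @ w - y) / len(y)

def hfcl_round(w, fl_clients, cl_datasets, lr=0.05):
    # One HFCL round, viewed from the parameter server (PS).
    # fl_clients:  data of resource-rich clients; in a real deployment these
    #              clients compute their updates locally and send only the
    #              model parameters to the PS (simulated here on their data).
    # cl_datasets: raw datasets uploaded by resource-limited clients; the PS
    #              computes their updates on their behalf.
    updates = []
    for X, y in fl_clients:      # FL side: update computed on the client
        updates.append(w - lr * local_gradient(w, X, y))
    for X, y in cl_datasets:     # CL side: update computed at the PS
        updates.append(w - lr * local_gradient(w, X, y))
    # Aggregation: equal-weight average of all model updates.
    return np.mean(updates, axis=0)

# Toy run: 4 clients, half FL-capable, half offloading their data to the PS.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(32, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=32)))

w = np.zeros(2)
for _ in range(300):
    w = hfcl_round(w, clients[:2], clients[2:])
print(w)  # converges toward w_true

The abstract's second efficiency technique, sequential data transmission, would correspond to the cl_datasets arriving in chunks over successive rounds rather than all at once; that scheduling is omitted here for brevity.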
Pages: 1529-1542
Number of pages: 14