Faster Convergence on Heterogeneous Federated Edge Learning: An Adaptive Clustered Data Sharing Approach

Cited by: 0
Authors
Hu, Gang [1 ]
Teng, Yinglei [1 ]
Wang, Nan [1 ]
Han, Zhu [2 ,3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun BUPT, Beijing Key Lab Work Safety Intelligent Monitoring, Beijing 100876, Peoples R China
[2] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77004 USA
[3] Kyung Hee Univ, Dept Comp Sci & Engn, Seoul 446701, South Korea
Funding
Japan Science and Technology Agency (JST); National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Training; Data models; Accuracy; Optimization; Distributed databases; Convergence; Delays; Data privacy; Costs; Computational modeling; 6G; federated learning; non-IID data; multicasting; sidelink; data sharing;
DOI
10.1109/TMC.2025.3533566
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Federated Edge Learning (FEL) emerges as a pioneering distributed machine learning paradigm for 6G hyper-connectivity, harnessing data from IoT devices while upholding data privacy. However, current FEL algorithms struggle with non-independent and non-identically distributed (non-IID) data, leading to elevated communication costs and compromised model accuracy. To address these statistical imbalances, we introduce a clustered data sharing framework that mitigates data heterogeneity by selectively sharing partial data from cluster heads to trusted associates through sidelink-aided multicasting. This collective communication pattern is integral to FEL training, where cluster formation and the efficiency of communication and computation jointly affect training latency and accuracy. To tackle the tightly coupled data sharing and resource optimization problem, we decompose it into a client clustering subproblem and an effective data sharing subproblem. Specifically, a distribution-based adaptive clustering algorithm (DACA) is devised based on three deductive cluster-forming conditions, which ensures the maximum sharing yield. Meanwhile, we design a stochastic optimization-based joint computing frequency and shared data volume optimization (JFVO) algorithm, which determines the optimal resource allocation under an uncertain objective function. Experiments show that the proposed framework enables FEL on non-IID datasets with a faster convergence rate and higher model accuracy in resource-limited environments.
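The clustering stage described in the abstract lends itself to a brief illustration. The sketch below is a minimal, hypothetical rendering of distribution-based client grouping for non-IID data: the head-selection rule (smallest Jensen-Shannon distance to a uniform label distribution), the fixed `cluster_size`, and all function names are illustrative assumptions, not the paper's three DACA forming conditions or its JFVO resource allocation.

```python
# Minimal sketch of distribution-based client clustering for non-IID
# federated edge learning. Hypothetical illustration only: the head-selection
# rule (smallest Jensen-Shannon distance to a uniform label distribution) and
# the fixed `cluster_size` are assumptions, not the paper's DACA conditions.
import numpy as np

def label_distribution(label_counts):
    """Normalize per-client label counts into a probability vector."""
    counts = np.asarray(label_counts, dtype=float)
    return counts / counts.sum()

def js_distance(p, q):
    """Jensen-Shannon distance between two label distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def form_clusters(client_counts, cluster_size=3):
    """Greedy grouping: the remaining client closest to a uniform label
    distribution becomes a cluster head, and the most skewed remaining
    clients are attached to it as associates, so that data shared by the
    head can offset the labels its associates lack."""
    dists = {cid: label_distribution(c) for cid, c in client_counts.items()}
    n_labels = len(next(iter(dists.values())))
    uniform = np.full(n_labels, 1.0 / n_labels)
    skew = {cid: js_distance(p, uniform) for cid, p in dists.items()}
    remaining = sorted(skew, key=skew.get)      # most balanced client first
    clusters = []
    while remaining:
        head = remaining.pop(0)                 # most balanced -> cluster head
        associates = []
        while remaining and len(associates) < cluster_size - 1:
            associates.append(remaining.pop())  # most skewed -> associate
        clusters.append({"head": head, "associates": associates})
    return clusters

if __name__ == "__main__":
    # Toy non-IID setting: 4 clients, 5 labels, skewed per-client label counts.
    counts = {
        0: [30, 30, 30, 30, 30],  # balanced labels -> becomes the cluster head
        1: [90,  5,  0,  0,  5],  # heavily skewed -> attached to head 0
        2: [ 0,  0, 80, 10, 10],  # heavily skewed -> attached to head 0
        3: [40, 35, 25, 30, 20],  # mildly skewed -> left as a singleton here
    }
    print(form_clusters(counts))
```

In the paper's framework, each head would then multicast a chosen volume of its local data to its associates over sidelinks, with the shared data volume and computing frequencies set by the JFVO step; the sketch above covers only the grouping stage.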
Pages: 5342-5356
Number of pages: 15