Learning Efficiency Maximization for Wireless Federated Learning With Heterogeneous Data and Clients

Cited: 3
Authors
Ouyang, Jinhao [1 ]
Liu, Yuan [1 ]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510641, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Federated learning; Convergence; Data models; Servers; Training; Computational modeling; Particle measurements; Wireless federated learning; client contribution; learning efficiency; AGGREGATION; CONVERGENCE; ALLOCATION; NETWORKS;
DOI
10.1109/TCCN.2024.3394889
CLC Number
TN [Electronic technology; Communication technology];
Subject Classification Code
0809;
Abstract
Federated learning is a promising distributed learning paradigm that protects data privacy by delegating learning tasks to local clients and aggregating local models, rather than raw data, at a server. However, heterogeneous data and clients degrade learning performance and cause significant communication overhead, which hinders the application of federated learning to wireless networks. To address this issue, we develop a novel federated learning framework with contribution-aware client participation and batch size selection to maximize learning efficiency, i.e., to reach the global optimal model in minimum time. We first analyze the impact of contribution-aware client participation on the convergence rate. A learning efficiency maximization problem is then formulated by jointly optimizing the contribution threshold and the data batch size. Because of the fractional structure of the objective function, whose Hessian matrix is not positive semidefinite, the formulated problem is non-convex. We propose a two-layer iterative algorithm that solves this non-convex problem optimally. The effectiveness of the proposed scheme is evaluated on public datasets against conventional benchmark schemes. Experimental results show that the proposed scheme improves learning efficiency by up to 19.11% on the MNIST dataset and 13.64% on the CIFAR-10 dataset compared to the benchmark schemes. These results demonstrate that the proposed scheme effectively mitigates the influence of data and client heterogeneity when maximizing learning efficiency.
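To make the idea of contribution-aware client participation concrete, the following is a minimal sketch of one federated round in which only clients whose model update exceeds a contribution threshold are aggregated. This is an illustrative toy, not the paper's algorithm: the contribution metric (the norm of the local update), the linear-regression local objective, and all function names here are assumptions chosen for brevity.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, batch_size=8, rng=None):
    # One SGD step of linear least-squares on a sampled mini-batch.
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(X), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = Xb.T @ (Xb @ w - yb) / batch_size
    return w - lr * grad

def contribution(w_global, w_local):
    # Hypothetical contribution proxy: magnitude of the model update.
    return np.linalg.norm(w_local - w_global)

def federated_round(w_global, clients, threshold, rng):
    # Each client trains locally; the server aggregates only those
    # whose contribution clears the threshold.
    local_models = [local_update(w_global, X, y, rng=rng) for X, y in clients]
    selected = [w for w in local_models
                if contribution(w_global, w) >= threshold]
    if not selected:  # fall back to all clients if none qualify
        selected = local_models
    return np.mean(selected, axis=0), len(selected)
```

In the paper, the contribution threshold is jointly optimized with the batch size; in this sketch both are fixed inputs, which is the simplest way to see how the threshold gates which local models enter the aggregation.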
Pages: 2282-2295
Number of pages: 14