Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments

Times Cited: 4
Authors
Jung, June-Pyo [1 ]
Ko, Young-Bae [1 ]
Lim, Sung-Hwa [2 ]
Affiliations
[1] Ajou Univ, Dept AI Convergence Network, 206 World Cup Ro, Suwon 16499, South Korea
[2] Namseoul Univ, Dept Multimedia, 91 Daehak Ro, Cheonan Si 31020, South Korea
Funding
National Research Foundation of Singapore;
Keywords
federated learning; Pareto optimality; mobile communication;
DOI
10.3390/s24082476
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
Federated learning (FL) is an emerging distributed learning technique in which models are trained on data collected by user devices in resource-constrained situations while preserving user privacy. However, FL has three main limitations. First, the parameter server (PS), which aggregates the local models trained on local user data, is typically far from users; the long path links between the PS and local nodes are therefore burdened, increasing network and computing resource consumption. Second, user device resources are limited, yet this is not considered during local model training and model-parameter transmission. Third, the PS-side links become highly loaded as the number of participating clients grows, and they become congested owing to the large size of the model parameters. In this study, we propose a resource-efficient FL scheme. We apply the concept of Pareto optimality with biased client selection to limit client participation, thereby ensuring efficient resource consumption and rapid model convergence. In addition, we propose a hierarchical structure with location-based clustering for device-to-device (D2D) communication using k-means clustering. Simulation results show that with the participation rate prate set to 0.75, the proposed scheme reduced transmitted and received network traffic by 75.89% and 78.77%, respectively, compared with FedAvg. It also achieves faster model convergence than other FL mechanisms, such as FedAvg and D2D-FedAvg.
Pages: 17