Optimizing Resource-Efficiency for Federated Edge Intelligence in IoT Networks

Times Cited: 0
Authors
Xiao, Yong [1 ]
Li, Yingyu [1 ]
Shi, Guangming [2 ]
Poor, H. Vincent [3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Inform & Commun, Wuhan, Peoples R China
[2] Xidian Univ, Sch Artificial Intelligence, Xian, Peoples R China
[3] Princeton Univ, Sch Engn & Appl Sci, Princeton, NJ 08544 USA
Source
2020 12TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP) | 2020
Funding
U.S. National Science Foundation;
Keywords
6G; edge intelligence; federated learning; IoT;
DOI
10.1109/wcsp49889.2020.9299798
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline Classification Code
0812 ;
Abstract
This paper studies an edge intelligence-based IoT network in which a set of edge servers learns a shared model using federated learning (FL) based on datasets uploaded from a multi-technology-supported IoT network. The data uploading performance of the IoT network and the computational capacity of the edge servers are entangled with each other in influencing the FL model training process. We propose a novel framework, called federated edge intelligence (FEI), that allows edge servers to evaluate the required number of data samples according to the energy cost of the IoT network as well as their local data processing capacity, and to request only the amount of data that is sufficient for training a satisfactory model. We evaluate the energy cost of data uploading when two widely used IoT solutions are available to each IoT device: licensed-band IoT (e.g., 5G NB-IoT) and unlicensed-band IoT (e.g., Wi-Fi, ZigBee, and 5G NR-U). We prove that the cost minimization problem of the entire IoT network is separable and can be divided into a set of subproblems, each of which can be solved by an individual edge server. We also introduce a mapping function to quantify the computational load of the edge servers under different combinations of three key parameters: dataset size, local batch size, and number of local training passes. Finally, we adopt an Alternating Direction Method of Multipliers (ADMM)-based approach to jointly optimize the energy cost of the IoT network and the average computing-resource utilization of the edge servers. We prove that our proposed algorithm does not cause any data leakage, nor does it disclose any topological information of the IoT network. Simulation results show that our proposed framework significantly improves the resource efficiency of the IoT network and edge servers with only a limited sacrifice in model convergence performance.
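The separability claim in the abstract means each edge server can solve its own cost subproblem while a consensus variable couples them, which is exactly the structure consensus ADMM exploits. The following is a minimal sketch of that pattern, not the paper's actual formulation: the per-server quadratic costs `(a_i/2)(z - b_i)^2` and all parameters are illustrative assumptions standing in for the paper's energy-cost and compute-load terms.

```python
# Hedged sketch of consensus ADMM on a separable cost. Each "edge server" i
# holds a private cost f_i(x) = (a_i/2)(x - b_i)^2 and solves its own
# subproblem; only x_i + u_i is exchanged during aggregation, so the local
# data (b_i) and the network topology are never shared directly.

def consensus_admm(a, b, rho=1.0, iters=200):
    """Minimize sum_i (a_i/2)(z - b_i)^2 over a shared z via ADMM splitting."""
    n = len(a)
    x = [0.0] * n   # local primal variables, one per edge server
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # global consensus variable
    for _ in range(iters):
        # Local step: each server minimizes f_i(x) + (rho/2)(x - z + u_i)^2,
        # which has the closed form below for a quadratic cost.
        x = [(a[i] * b[i] + rho * (z - u[i])) / (a[i] + rho) for i in range(n)]
        # Global step: average the shared messages x_i + u_i.
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Dual step: accumulate the consensus residual.
        u = [u[i] + x[i] - z for i in range(n)]
    return z

# Example: three servers with different weights and targets. The analytic
# minimizer is sum(a_i * b_i) / sum(a_i) = (4 + 2 + 6) / 6 = 2.0.
z_star = consensus_admm(a=[1.0, 2.0, 3.0], b=[4.0, 1.0, 2.0])
```

For a quadratic consensus problem like this, the iterates converge linearly for any rho > 0; the closed-form local step is what makes the decomposition into per-server subproblems cheap.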
Pages: 86-92 (7 pages)