Intelligent Task Offloading and Energy Allocation in the UAV-Aided Mobile Edge-Cloud Continuum

Cited by: 31
Authors
Cheng, Zhipeng [1 ]
Gao, Zhibin [1 ]
Liwang, Minghui [2 ]
Huang, Lianfen [3 ]
Du, Xiaojiang [4 ]
Guizani, Mohsen [5 ]
Affiliations
[1] Xiamen Univ, Commun Engn, Xiamen, Peoples R China
[2] Xiamen Univ, Xiamen, Peoples R China
[3] Xiamen Univ, Dept Commun Engn, Xiamen, Peoples R China
[4] Stevens Inst Technol, Dept Elect & Comp Engn, Hoboken, NJ 07030 USA
[5] Qatar Univ, Dept Comp Sci & Engn, Doha, Qatar
Source
IEEE NETWORK | 2021, Vol. 35, No. 5
Funding
National Natural Science Foundation of China;
Keywords
Training data; Privacy; Power lasers; Reinforcement learning; Unmanned aerial vehicles; Resource management; Servers; Edge computing; Cloud computing; COMMUNICATION;
D O I
10.1109/MNET.010.2100025
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline Classification Code
0812 ;
Abstract
The arrival of the big data and Internet of Things (IoT) era greatly promotes innovative in-network computing techniques, where the edge-cloud continuum becomes a feasible paradigm for handling multi-dimensional resources such as computing, storage, and communication. In this article, an energy-constrained unmanned aerial vehicle (UAV)-aided mobile edge-cloud continuum framework is introduced, where tasks offloaded from ground IoT devices can be cooperatively executed by a UAV acting as an edge server and by a cloud server connected to a ground base station (GBS), which serves as an access point. Specifically, the UAV is powered by a laser beam transmitted from the GBS and can in turn charge IoT devices wirelessly. An interesting joint task offloading and energy allocation problem is investigated, maximizing a long-term reward determined by the executed task size and execution delay, under constraints such as energy causality, task causality, and cache causality. A federated deep reinforcement learning (FDRL) framework is proposed to learn the joint task offloading and energy allocation decisions while reducing the training cost and preventing privacy leakage during DRL training. Numerical simulations verify the effectiveness of the proposed scheme against three baseline schemes.
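The FDRL framework combines local DRL training at each agent with federated aggregation of model parameters at a server, so that raw training data never leaves the devices. A minimal sketch of the aggregation step, assuming a FedAvg-style weighted average of network parameters (the function and the three-agent setup below are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine each client's parameter arrays,
    weighted by that client's share of the total training data."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (size / total) * w
    return avg

# Three hypothetical agents (e.g., UAV-served IoT devices), each holding
# two parameter arrays standing in for a small local policy network.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [100, 200, 100]  # local training-sample counts

global_weights = fedavg(clients, sizes)
```

In a full FDRL loop, each agent would copy `global_weights` back into its local network, continue DRL training on its own experience, and upload updated parameters for the next aggregation round; only model parameters cross the network, which is the privacy-preserving property the abstract refers to.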
Pages: 42-49
Page count: 8