Joint Computation Offloading and Resource Allocation for D2D-Assisted Mobile Edge Computing

Cited by: 25
Authors
Jiang, Wei [1 ]
Feng, Daquan [1 ]
Sun, Yao [2 ]
Feng, Gang [3 ,4 ]
Wang, Zhenzhong [5 ]
Xia, Xiang-Gen [6 ]
Affiliations
[1] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen Key Lab Digital Creat Technol, Guangdong Prov Engn Lab Digital Creat Technol, Guan, Shenzhen 518060, Guangdong, Peoples R China
[2] Univ Glasgow, James Watt Sch Engn, Glasgow G12 8QQ, Scotland
[3] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Huzhou 313001, Peoples R China
[4] Univ Elect Sci & Technol China, Natl Key Lab Sci & Technol Commun, Chengdu 611731, Peoples R China
[5] China Media Grp, Tech Management Ctr, Beijing 100020, Peoples R China
[6] Univ Delaware, Dept Elect & Comp Engn, Newark, DE 19716 USA
Keywords
Task analysis; Servers; Resource management; Device-to-device communication; Wireless communication; Mobile handsets; Energy consumption; Computation offloading; Deep reinforcement learning; Device-to-device; Mobile edge computing; Resource allocation; Networks; Minimization; Internet; Fog
DOI
10.1109/TSC.2022.3190276
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
Computation offloading via device-to-device (D2D) communications can improve the performance of mobile edge computing by exploiting the computing resources of user devices. However, most existing optimization-based computation offloading schemes lack self-adaptive ability in dynamic environments, owing to time-varying wireless channels, continuous-discrete mixed action spaces, and the need for coordination among devices. Conventional reinforcement learning approaches are also ineffective at solving optimal sequential decision problems with continuous-discrete mixed actions. In this paper, we propose a hierarchical deep reinforcement learning (HDRL) framework to solve the joint computation offloading and resource allocation problem. The proposed HDRL framework has a hierarchical actor-critic architecture with a meta critic, multiple basic critics, and multiple actors. Specifically, a combination of deep Q-network (DQN) and deep deterministic policy gradient (DDPG) is exploited to cope with the continuous-discrete mixed action space. Furthermore, to handle coordination among devices, the meta critic acts as a DQN that outputs the joint discrete action of all devices, while each basic critic acts as the critic part of a DDPG that evaluates the output of the corresponding actor. Simulation results show that the proposed HDRL algorithm significantly reduces task computation latency compared with baseline offloading schemes.
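The hierarchical action selection the abstract describes, where a DQN-style meta critic picks the joint discrete offloading decision and per-device DDPG actors produce continuous resource allocations scored by basic critics, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the state dimension, network sizes, action meanings (offloading target per device, bandwidth fraction), and the untrained random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DEVICES = 3    # devices making offloading decisions
N_DISCRETE = 4   # hypothetical targets per device: local, MEC server, 2 D2D helpers
STATE_DIM = 8    # hypothetical state: channel gains, task sizes, queue lengths
HIDDEN = 16

def mlp_init(in_dim, out_dim):
    # One hidden layer; random weights stand in for trained parameters.
    return (rng.normal(0, 0.1, (in_dim, HIDDEN)),
            rng.normal(0, 0.1, (HIDDEN, out_dim)))

def mlp(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

# Meta critic: a DQN over the *joint* discrete action of all devices.
meta_critic = mlp_init(STATE_DIM, N_DISCRETE ** N_DEVICES)

# One DDPG actor (continuous allocation) and one basic critic per device.
actors  = [mlp_init(STATE_DIM, 1) for _ in range(N_DEVICES)]
critics = [mlp_init(STATE_DIM + 1, 1) for _ in range(N_DEVICES)]

def select_action(state):
    # 1) Meta critic picks the joint discrete offloading decision (argmax Q),
    #    then the joint index is decoded into one choice per device.
    q = mlp(meta_critic, state)
    joint = int(np.argmax(q))
    discrete = [(joint // N_DISCRETE**i) % N_DISCRETE for i in range(N_DEVICES)]
    # 2) Each actor outputs a continuous action (e.g. a bandwidth fraction),
    #    squashed to (0, 1); its basic critic scores the (state, action) pair.
    continuous, values = [], []
    for actor, critic in zip(actors, critics):
        a = 1.0 / (1.0 + np.exp(-mlp(actor, state)[0]))    # sigmoid -> (0, 1)
        continuous.append(a)
        values.append(mlp(critic, np.append(state, a))[0])  # critic Q-value
    return discrete, continuous, values

state = rng.normal(size=STATE_DIM)
d, c, v = select_action(state)
print("discrete offloading choices:", d)
print("continuous allocations:", [round(x, 3) for x in c])
```

Encoding the joint discrete action as a single index of size `N_DISCRETE ** N_DEVICES` is one simple way to let a single DQN head coordinate all devices, at the cost of exponential growth in the number of devices.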
Pages: 1949-1963
Page count: 15
References
42 in total
[1] [Anonymous]. 2020. MOB APPL MARK SIZ SH.
[2] Chai, Rong; Lin, Junliang; Chen, Minglong; Chen, Qianbin. Task Execution Cost Minimization-Based Joint Computation Offloading and Resource Allocation for Cellular D2D MEC Systems. IEEE Systems Journal, 2019, 13(4): 4110-4121.
[3] Chatzopoulos, Dimitris; Bermejo, Carlos; ul Haq, Ehsan; Li, Yong; Hui, Pan. D2D Task Offloading: A Dataset-Based Q&A. IEEE Communications Magazine, 2019, 57(2): 102-107.
[4] Chen, Jienan; Chen, Siyu; Wang, Qi; Cao, Bin; Feng, Gang; Hu, Jianhao. iRAF: A Deep Reinforcement Learning Approach for Collaborative Mobile Edge Computing IoT Networks. IEEE Internet of Things Journal, 2019, 6(4): 7011-7024.
[5] Chen, Xianfu; Zhang, Honggang; Wu, Celimuge; Mao, Shiwen; Ji, Yusheng; Bennis, Mehdi. Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning. IEEE Internet of Things Journal, 2019, 6(3): 4005-4018.
[6] Chen, Xu; Jiao, Lei; Li, Wenzhong; Fu, Xiaoming. Efficient Multi-User Computation Offloading for Mobile-Edge Cloud Computing. IEEE/ACM Transactions on Networking, 2016, 24(5): 2827-2840.
[7] Cheng, Nan; Lyu, Feng; Quan, Wei; Zhou, Conghao; He, Hongli; Shi, Weisen; Shen, Xuemin. Space/Aerial-Assisted Computing Offloading for IoT Applications: A Learning-Based Approach. IEEE Journal on Selected Areas in Communications, 2019, 37(5): 1117-1129.
[8] Ding, Ruijin; Xu, Yadong; Gao, Feifei; Shen, Xuemin. Trajectory Design and Access Control for Air-Ground Coordinated Communications System With Multiagent Deep Reinforcement Learning. IEEE Internet of Things Journal, 2022, 9(8): 5785-5798.
[9] Dinh, Thinh Quang; La, Quang Duy; Quek, Tony Q. S.; Shin, Hyundong. Learning for Computation Offloading in Mobile Edge Computing. IEEE Transactions on Communications, 2018, 66(12): 6353-6367.
[10] Fang, Tao; Yuan, Feng; Ao, Liang; Chen, Jiaxin. Joint Task Offloading, D2D Pairing, and Resource Allocation in Device-Enhanced MEC: A Potential Game Approach. IEEE Internet of Things Journal, 2022, 9(5): 3226-3237.