Decentralized Scheduling for Concurrent Tasks in Mobile Edge Computing via Deep Reinforcement Learning

Cited by: 14
Authors
Fan, Ye [1 ,2 ]
Ge, Jidong [1 ,2 ]
Zhang, Sheng [1 ]
Wu, Jie [3 ]
Luo, Bin [1 ,2 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210008, Jiangsu, Peoples R China
[2] Nanjing Univ, Software Inst, Nanjing 210008, Jiangsu, Peoples R China
[3] Temple Univ, Ctr Networked Comp, Philadelphia, PA 19122 USA
Keywords
Task analysis; Servers; Costs; Heuristic algorithms; Training; Scheduling algorithms; Deep learning; Deep reinforcement learning; deep Q-learning; mobile edge computing; resource allocation; task offloading
DOI
10.1109/TMC.2023.3266226
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Mobile Edge Computing (MEC) is a promising solution to enhance the computing capability of resource-limited networks. A fundamental problem in MEC is efficiently offloading tasks from user devices to edge servers. However, a gap remains before deployment in real-world environments: 1) traditional centralized approaches need complete information about the edge network, ignoring the communication costs generated by synchronization; 2) previous works do not consider concurrent computation on edge servers, which may cause dynamic changes in the environment; and 3) the scheduling algorithm should deliver individualized decisions for different users independently and with high efficiency. To close this gap, we study a multi-user task offloading problem where user devices make offloading decisions independently. We consider the concurrent execution of tasks and formulate a non-divisible, delay-aware task offloading problem to jointly minimize the dropped task ratio and long-term latency. We propose a decentralized task scheduling algorithm based on DRL that makes offloading decisions without knowing the information of other user devices. We employ Double-DQN, Dueling-DQN, Prioritized Replay Memory, and Recurrent Neural Network (RNN) techniques to improve the algorithm's performance. The results of simulation experiments show that our method significantly reduces the long-term latency and dropped task ratio compared to the baseline algorithms.
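The abstract names two standard DQN refinements the authors combine. As a hedged illustration (not the paper's code; the network sizes, action set, and reward are hypothetical), the sketch below shows the two core formulas: the dueling aggregation Q(s, a) = V(s) + A(s, a) − mean over actions of A(s, ·), and the Double-DQN target, where the online network selects the next action and the target network evaluates it, which reduces Q-value overestimation.

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling aggregation: combine a state-value stream V(s) and an
    advantage stream A(s, a) into Q-values for each offloading choice
    (e.g., local execution or one of several edge servers)."""
    return value + advantages - advantages.mean()

def double_dqn_target(reward, gamma, online_q_next, target_q_next, done):
    """Double-DQN target: the online network picks the greedy next action,
    the target network supplies its value estimate."""
    a_star = int(np.argmax(online_q_next))
    return reward + (0.0 if done else gamma * target_q_next[a_star])

# Hypothetical example: 3 offloading choices (local, edge server 1, edge server 2)
q = dueling_q_values(value=1.0, advantages=np.array([0.5, -0.1, -0.4]))
target = double_dqn_target(reward=-0.2, gamma=0.9,
                           online_q_next=q, target_q_next=q, done=False)
```

Subtracting the mean advantage makes the V/A decomposition identifiable; without it, a constant could shift freely between the two streams.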
Pages: 2765-2779 (15 pages)