Deep-Reinforcement-Learning-Based Joint Optimization of Task Migration and Resource Allocation for Mobile-Edge Computing

Times Cited: 0
Authors
Li, Juncai [1 ]
Jiang, Qi [1 ]
Leung, Victor C. M. [2 ,3 ]
Ma, Zhuo [1 ]
Abrokwa, Kofi Kwarteng [1 ]
Affiliations
[1] Hefei Univ Technol, Sch Elect Engn & Automat, Hefei 230009, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[3] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding
National Natural Science Foundation of China;
Keywords
Resource management; Multi-access edge computing; Optimization; Internet of Things; Dynamic scheduling; Costs; Cloud computing; Processor scheduling; Deep reinforcement learning (DRL); Markov decision processes (MDPs); mobile-edge computing; resource allocation; task migration; INTERNET; IOT; MEC;
DOI
10.1109/JIOT.2025.3555503
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Task migration and resource allocation are essential for integrating available resources to improve the efficiency of mobile-edge computing in support of computation-intensive and delay-sensitive Internet of Things applications, yet their cooperative optimization has not been well addressed. In this article, a deep reinforcement learning (DRL)-based adaptive cooperative optimization strategy is presented to fill this gap. Task migration and resource allocation are jointly employed to adaptively integrate the available resources of cooperative edge nodes for processing randomly offloaded tasks. The policy optimization that minimizes the system cost and task dropout rate is formulated as a Markov decision process. A DRL algorithm that enhances the deep deterministic policy gradient with a dual experience pool is proposed to jointly optimize task migration and resource allocation in unknown stochastic application environments. Simulation experiments were conducted to evaluate the performance of the presented strategy, and the results show that it increases system rewards by 17.9%-61.5% and reduces the task dropout rate by 5.2%-31.3% compared with benchmarks.
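The abstract names a DDPG variant with a dual experience pool but gives no implementation details; the sketch below illustrates one common reading of that idea, in which transitions are routed into a regular pool or a high-reward pool and mini-batches are drawn from both so rare high-reward experiences are replayed more often. The class name, `reward_threshold`, and `high_ratio` are illustrative assumptions, not the paper's actual design.

```python
import random
from collections import deque


class DualExperiencePool:
    """Illustrative dual replay buffer: transitions whose reward exceeds a
    threshold go into a separate high-reward pool; sampling mixes both pools
    so high-reward experiences appear more often in training batches."""

    def __init__(self, capacity=10000, reward_threshold=0.0, high_ratio=0.5):
        self.regular = deque(maxlen=capacity)   # ordinary transitions
        self.high = deque(maxlen=capacity)      # high-reward transitions
        self.reward_threshold = reward_threshold
        self.high_ratio = high_ratio            # target fraction from high pool

    def add(self, state, action, reward, next_state, done):
        transition = (state, action, reward, next_state, done)
        if reward > self.reward_threshold:
            self.high.append(transition)
        else:
            self.regular.append(transition)

    def sample(self, batch_size):
        # Draw up to high_ratio of the batch from the high-reward pool,
        # filling the remainder from the regular pool.
        n_high = min(int(batch_size * self.high_ratio), len(self.high))
        n_reg = min(batch_size - n_high, len(self.regular))
        return (random.sample(list(self.high), n_high)
                + random.sample(list(self.regular), n_reg))
```

In a full DDPG loop, `sample()` would feed the critic and actor updates; the paper's actual sampling rule and threshold policy may differ from this sketch.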
Pages: 24431-24440
Page count: 10