Distributed deep reinforcement learning for independent task offloading in Mobile Edge Computing

Cited: 0
Authors
Darchini-Tabrizi, Mohsen [1]
Roudgar, Amirhossein [1]
Entezari-Maleki, Reza [1,2,3]
Sousa, Leonel [3]
Affiliations
[1] Iran Univ Sci & Technol, Sch Comp Engn, Tehran, Iran
[2] Inst Res Fundamental Sci IPM, Sch Comp Sci, Tehran, Iran
[3] Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal
Keywords
Mobile Edge Computing; Task offloading; Task size prediction; Deep reinforcement learning;
DOI
10.1016/j.jnca.2025.104211
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline classification code
0812
Abstract
Mobile Edge Computing (MEC) has been identified as an innovative paradigm to improve the performance and efficiency of mobile applications by offloading computation-intensive tasks to nearby edge servers. However, the effective implementation of task offloading in MEC systems faces challenges due to uncertainty, heterogeneity, and dynamicity. Deep Reinforcement Learning (DRL) provides a powerful approach for devising optimal task offloading policies in complex and uncertain environments. This paper presents a DRL-based task offloading approach using the Deep Deterministic Policy Gradient (DDPG) and Distributed Distributional Deep Deterministic Policy Gradient (D4PG) algorithms. The proposed solution establishes a distributed system in which multiple mobile devices act as Reinforcement Learning (RL) agents to optimize their individual performance. To reduce the computational complexity of the neural networks, Gated Recurrent Units (GRUs) are used instead of Long Short-Term Memory (LSTM) units to predict the load of edge nodes within the observed state. In addition, a GRU-based sequence model is introduced to estimate task sizes in scenarios where these sizes are unknown. Finally, a novel scheduling algorithm is proposed that outperforms commonly used approaches by leveraging the estimated task sizes. Comprehensive simulations were conducted to evaluate the efficacy of the proposed approach, benchmarking it against multiple baseline and state-of-the-art algorithms. Results show significant improvements in average processing delay and task drop rate, confirming the effectiveness of the proposed approach.
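The abstract's task-size estimator is a GRU rolled over the history of recently observed task sizes. The following is a minimal sketch of that idea, not the authors' implementation: a single hand-written GRU cell (numpy, random untrained weights) whose final hidden state is mapped to a scalar prediction. All names (`GRUCell`, `predict_next_task_size`, `w_out`) and the hidden size are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: one forward step h_t = GRU(x_t, h_{t-1})."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        def w(rows, cols):
            return rng.normal(0.0, 0.1, (rows, cols))
        self.hidden_size = hidden_size
        # Input and recurrent weights for the update (z), reset (r),
        # and candidate (n) gates. Untrained; for illustration only.
        self.Wz, self.Uz = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.Wr, self.Ur = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.Wn, self.Un = w(hidden_size, input_size), w(hidden_size, hidden_size)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)        # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)        # reset gate
        n = np.tanh(self.Wn @ x + self.Un @ (r * h))  # candidate state
        return (1.0 - z) * n + z * h                  # blended new hidden state

def predict_next_task_size(cell, w_out, sizes):
    """Roll the cell over past task sizes; read out a scalar prediction."""
    h = np.zeros(cell.hidden_size)
    for s in sizes:
        h = cell.step(np.array([s]), h)
    return float(w_out @ h)

cell = GRUCell(input_size=1, hidden_size=8)
w_out = np.full(8, 0.1)              # hypothetical linear read-out layer
history = [0.4, 0.7, 0.5, 0.9]       # normalized sizes of recent tasks
pred = predict_next_task_size(cell, w_out, history)
```

In practice such a model would be trained (e.g. with a framework's built-in GRU layer and a regression loss) on traces of past task sizes; the sketch only shows the recurrence that lets the scheduler form a size estimate before a task's true size is known.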
Pages: 16