Deep reinforcement learning-based resource allocation in multi-access edge computing

Cited by: 13
Authors
Khani, Mohsen [1 ]
Sadr, Mohammad Mohsen [2 ]
Jamali, Shahram [3 ]
Affiliations
[1] Islamic Azad Univ, Semnan Branch, Dept Comp Engn, Semnan, Iran
[2] Payame Noor Univ, Dept Comp & Informat Technol Engn, Tehran, Iran
[3] Univ Mohaghegh Ardabili, Dept Comp Engn, Ardebil, Iran
Keywords
5G; deep reinforcement learning (DRL); mobile edge computing (MEC); network slicing; resource allocation
DOI
10.1002/cpe.7995
CLC Number
TP31 [Computer software];
Subject Classification Codes
081202; 0835
Abstract
Network architects and engineers face challenges in meeting the increasing complexity and low-latency requirements of various services. To tackle these challenges, multi-access edge computing (MEC) has emerged as a solution that brings computation and storage resources closer to the network's edge. This proximity enables low-latency data access, reduces network congestion, and improves quality of service. Effective resource allocation is crucial for exploiting MEC's capabilities and overcoming its resource constraints; however, traditional allocation approaches lack intelligence and adaptability. This study explores the use of deep reinforcement learning (DRL) to enhance resource allocation in MEC. DRL has gained significant attention for its ability to adapt to changing network conditions and to handle complex, dynamic environments more effectively than traditional methods. The study presents the results of applying DRL for efficient and dynamic resource allocation in MEC, optimizing allocation decisions based on real-time environmental conditions and user demands. By surveying current research on DRL-based resource allocation in MEC, including the components, algorithms, and performance metrics of various DRL-based schemes, this review article shows that DRL-based resource allocation schemes outperform traditional methods under diverse MEC conditions. The findings highlight the potential of DRL-based approaches for addressing the challenges associated with resource allocation in MEC.
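To make the setting concrete, the sketch below is a minimal, illustrative example of the kind of DRL-based allocation surveyed in the article: a small deep Q-network (DQN) agent that assigns each arriving task to one of several edge servers so as to reduce queueing latency. It is not taken from the reviewed work; the toy environment, the negative-latency reward, the number of servers, and all hyperparameters are assumptions, and replay buffers and target networks are omitted for brevity.

```python
# Minimal, illustrative sketch (not the authors' scheme): a tiny DQN agent that
# assigns each arriving task to one of N_SERVERS edge servers. The environment
# dynamics, reward (negative latency), and all hyperparameters are assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

N_SERVERS = 3                 # hypothetical number of MEC servers
STATE_DIM = N_SERVERS + 1     # queue load per server + size of the incoming task

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_SERVERS))

    def forward(self, x):
        return self.net(x)

def step(loads, task, action):
    """Toy MEC dynamics: enqueue the task on the chosen server, drain every
    queue by one unit of work per slot, and reward low waiting time."""
    loads = loads.copy()
    loads[action] += task
    latency = loads[action]                # proxy for the task's waiting time
    loads = np.maximum(loads - 1.0, 0.0)   # each server serves one unit per slot
    return loads, -float(latency)          # negative latency as reward

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.95, 1.0
loads = np.zeros(N_SERVERS)
task = random.uniform(0.5, 2.0)            # size of the first task (assumption)

for t in range(5000):
    state = torch.tensor(np.append(loads, task), dtype=torch.float32)
    # epsilon-greedy action selection over the servers
    if random.random() < eps:
        action = random.randrange(N_SERVERS)
    else:
        with torch.no_grad():
            action = int(qnet(state).argmax())
    loads, reward = step(loads, task, action)
    task = random.uniform(0.5, 2.0)         # next arriving task
    next_state = torch.tensor(np.append(loads, task), dtype=torch.float32)
    # one-step TD target (replay buffer and target network omitted for brevity)
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    loss = (qnet(state)[action] - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    eps = max(0.05, eps * 0.999)            # decay exploration over time
```

In this toy setup the state combines per-server queue loads with the incoming task size, mirroring the "real-time environmental conditions and user demands" that DRL-based MEC schemes condition their allocation decisions on; a full implementation would typically add experience replay, a target network, and a richer reward capturing energy and bandwidth costs.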
Pages: 29