Deep reinforcement learning based mobility management in a MEC-Enabled cellular IoT network

Cited by: 0
Authors
Kabir, Homayun [1 ]
Tham, Mau-Luen [1 ]
Chang, Yoong Choon [1 ]
Chow, Chee-Onn [2 ]
Affiliations
[1] Univ Tunku Abdul Rahman, Lee Kong Chian Fac Engn & Sci, Dept Elect & Elect Engn, Sungai Long Campus, Selangor 43000, Malaysia
[2] Univ Malaya, Fac Engn, Dept Elect Engn, Malaya 50603, Malaysia
Keywords
Handover management; Edge computing; CIoT; Deep reinforcement learning; Parametrized deep Q network; EDGE; HANDOVER; ALLOCATION; INTERNET;
DOI
10.1016/j.pmcj.2024.101987
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Subject Classification Code
0812 ;
Abstract
Mobile Edge Computing (MEC) has paved the way for a new Cellular Internet of Things (CIoT) paradigm, in which resource-constrained CIoT Devices (CDs) can offload tasks to a computing server located at either a Base Station (BS) or an edge node. For CDs moving at high speed, seamless mobility is crucial when the MEC service migrates from one BS to another. In this paper, we investigate the problem of joint power allocation and Handover (HO) management in a MEC network using a Deep Reinforcement Learning (DRL) approach. To handle the hybrid action space (continuous power allocation and discrete HO decision), we leverage the Parameterized Deep Q-Network (P-DQN) to learn a near-optimal solution. Simulation results show that the proposed P-DQN algorithm outperforms conventional approaches, such as nearest BS + random power and random BS + random power, in terms of reward, HO cost, and total power consumption. In the simulations, HO occurs almost exactly at the cell-edge point between two BSs, indicating that HO is managed almost perfectly. In addition, total power consumption is around 0.151 W with P-DQN, compared with about 0.75 W for both nearest BS + random power and random BS + random power.
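The abstract's central technique is a parameterized deep Q-network acting over a hybrid action space: a discrete choice of serving BS (the HO decision) paired with a continuous transmit-power parameter. The following is a minimal action-selection sketch of that idea, not the authors' implementation; the network sizes, state dimension, number of candidate BSs, and power budget P_MAX are illustrative assumptions.

# Minimal P-DQN action-selection sketch for a hybrid action space
# (discrete: serving-BS / HO choice, continuous: transmit power).
# All dimensions and constants below are assumptions for illustration.
import torch
import torch.nn as nn

N_BS = 3          # assumed number of candidate BSs (discrete actions)
STATE_DIM = 8     # assumed state size (e.g., per-BS signal strength, CD position)
P_MAX = 0.2       # assumed per-device power budget in watts

class ParamActor(nn.Module):
    """Maps a state to one continuous power level per candidate BS."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_BS), nn.Sigmoid())   # output in (0, 1), scaled below

    def forward(self, state):
        return P_MAX * self.net(state)           # shape: (batch, N_BS)

class ParamQNet(nn.Module):
    """Q(s, k, x_k): scores each discrete BS choice given all power parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_BS, 64), nn.ReLU(),
            nn.Linear(64, N_BS))

    def forward(self, state, powers):
        return self.net(torch.cat([state, powers], dim=-1))  # (batch, N_BS)

def select_action(actor, qnet, state, epsilon=0.1):
    """Epsilon-greedy over the discrete BS index; power comes from the actor."""
    with torch.no_grad():
        powers = actor(state)                    # continuous parameters
        q_values = qnet(state, powers)           # Q-value of each (BS, power) pair
    if torch.rand(1).item() < epsilon:
        bs = torch.randint(N_BS, (1,)).item()    # explore the discrete choice
    else:
        bs = q_values.argmax(dim=-1).item()      # exploit the best-scoring BS
    return bs, powers[0, bs].item()              # (HO decision, power in watts)

if __name__ == "__main__":
    actor, qnet = ParamActor(), ParamQNet()
    state = torch.randn(1, STATE_DIM)            # dummy observation
    bs, power = select_action(actor, qnet, state)
    print(f"serve via BS {bs} with transmit power {power:.3f} W")

In a full P-DQN agent, the Q-network would be trained with a standard DQN-style temporal-difference loss while the actor is updated to maximize the Q-values of its own power outputs; the reward would combine throughput, HO cost, and power consumption as described in the abstract.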
Pages: 17