Deep reinforcement learning based mobility management in a MEC-enabled cellular IoT network

Cited: 0
Authors
Kabir, Homayun [1 ]
Tham, Mau-Luen [1 ]
Chang, Yoong Choon [1 ]
Chow, Chee-Onn [2 ]
Affiliations
[1] Univ Tunku Abdul Rahman, Lee Kong Chian Fac Engn & Sci, Dept Elect & Elect Engn, Sungai Long Campus, Selangor 43000, Malaysia
[2] Univ Malaya, Fac Engn, Dept Elect Engn, Kuala Lumpur 50603, Malaysia
Keywords
Handover management; Edge computing; CIoT; Deep reinforcement learning; Parametrized deep Q network; EDGE; HANDOVER; ALLOCATION; INTERNET
DOI
10.1016/j.pmcj.2024.101987
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Mobile Edge Computing (MEC) has paved the way for the new Cellular Internet of Things (CIoT) paradigm, where resource-constrained CIoT Devices (CDs) can offload tasks to a computing server located at either a Base Station (BS) or an edge node. For CDs moving at high speed, seamless mobility is crucial when the MEC service migrates from one BS to another. In this paper, we investigate the problem of joint power allocation and Handover (HO) management in a MEC network using a Deep Reinforcement Learning (DRL) approach. To handle the hybrid action space (continuous power allocation and discrete HO decision), we leverage the Parameterized Deep Q-Network (P-DQN) to learn a near-optimal solution. Simulation results illustrate that the proposed P-DQN algorithm outperforms conventional approaches, such as nearest BS + random power and random BS + random power, in terms of reward, HO cost, and total power consumption. HO occurs almost exactly at the cell edge between two BSs, indicating that HO is managed nearly perfectly. In addition, total power consumption is around 0.151 W with P-DQN, compared with about 0.75 W for both nearest BS + random power and random BS + random power.
Pages: 17
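
To illustrate the hybrid-action idea described in the abstract, below is a minimal P-DQN-style action-selection sketch, not the paper's implementation. It assumes a toy observation containing per-BS SINR and distance, a discrete action that picks the serving BS (the HO decision), and a continuous transmit-power parameter attached to each BS choice; the network names (ParamNet, QNet), dimensions, and power cap are hypothetical.

```python
# Minimal P-DQN action-selection sketch (hypothetical names and dimensions; not the paper's code).
# Hybrid action: discrete HO decision (which BS to attach to) + continuous transmit power.
import torch
import torch.nn as nn

N_BS = 3               # hypothetical number of candidate base stations
STATE_DIM = 2 * N_BS   # e.g. per-BS SINR and distance (illustrative state design)
P_MAX = 0.2            # hypothetical maximum transmit power in watts

class ParamNet(nn.Module):
    """Maps state -> one continuous power level per discrete BS choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_BS), nn.Sigmoid())

    def forward(self, s):
        return P_MAX * self.net(s)   # power in (0, P_MAX) for each candidate BS

class QNet(nn.Module):
    """Maps (state, all continuous parameters) -> Q-value of each discrete BS choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_BS, 64), nn.ReLU(),
                                 nn.Linear(64, N_BS))

    def forward(self, s, powers):
        return self.net(torch.cat([s, powers], dim=-1))

def select_action(s, param_net, q_net, eps=0.1):
    """P-DQN-style greedy selection with epsilon exploration over the discrete part."""
    with torch.no_grad():
        powers = param_net(s)                 # continuous parameter for every BS
        q = q_net(s, powers)                  # Q-value of each (BS, its power) pair
    if torch.rand(()) < eps:
        bs = int(torch.randint(N_BS, ()))     # explore the discrete HO decision
    else:
        bs = int(q.argmax())                  # exploit: best BS given its power
    return bs, powers[bs].item()              # (HO target BS, transmit power in W)

if __name__ == "__main__":
    state = torch.randn(STATE_DIM)            # placeholder CIoT device observation
    bs, p = select_action(state, ParamNet(), QNet())
    print(f"serve via BS {bs} with power {p:.3f} W")
```

In the full P-DQN scheme, QNet is trained with a standard DQN-style temporal-difference loss over the discrete actions, while ParamNet is updated to maximize the Q-values of its own continuous outputs; the reward in this setting would combine HO cost and power consumption as the abstract describes.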