Offline Reinforcement Learning for Asynchronous Task Offloading in Mobile Edge Computing

Cited by: 2
Authors
Zhang, Bolei [1 ]
Xiao, Fu [1 ]
Wu, Lifa [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Comp, Nanjing 210023, Jiangsu, Peoples R China
Source
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT | 2024, Vol. 21, No. 1
Keywords
Task offloading; mobile edge computing; offline reinforcement learning; mobile sensing; resource allocation
DOI
10.1109/TNSM.2023.3316626
CLC Number
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Edge servers, located in close proximity to mobile users, have become key components for providing augmented computation and bandwidth. As the resources of edge servers are limited and shared, it is critical for decentralized mobile users to determine the amount of workload to offload, so as to avoid competition for, or waste of, the public resources at the edge servers. Reinforcement learning (RL) methods, which are sequential and model-free, have been widely considered a promising approach. However, directly deploying RL in edge computing remains elusive, since arbitrary exploration in real online environments often leads to poor user experience. To avoid these costly interactions, in this paper we propose an offline RL framework that can be optimized using only a static offline dataset. In essence, our method first trains a supervised offline model to simulate the edge computing environment dynamics, and then optimizes the offloading policy in this offline environment with cost-free interactions. As the offloading requests are mostly asynchronous, we adopt a mean-field approach that treats all neighboring users as a single agent; the problem can then be simplified and reduced to a game between only two players. Moreover, we limit the length of the offline model rollouts to ensure the simulated trajectories remain accurate, so that the trained offloading policies generalize to unseen online environments. Theoretical analyses are conducted to validate the accuracy and convergence of our algorithm. In the experiments, we first train the offline simulation environment with a real historical dataset, and then optimize the offloading policy in this environment model. The results show that our algorithm converges very fast during training and, during execution, still achieves high performance in the online environment.
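To make the two-stage procedure described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: it fits a supervised dynamics model on a static offloading log and then improves an offloading policy using only short simulated rollouts (the role of the limited rollout length). The class name OfflineDynamicsModel, the linear model, the toy offloading log, and parameters such as the rollout length H are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- 1. Supervised dynamics model trained on the offline dataset -----------
class OfflineDynamicsModel:
    """Linear next-state/reward predictor fit by least squares; a stand-in
    for whatever supervised model is used to simulate environment dynamics."""
    def __init__(self, s_dim, a_dim):
        self.W = np.zeros((s_dim + a_dim, s_dim + 1))   # predicts [s', r]

    def fit(self, S, A, S_next, R):
        X = np.hstack([S, A])
        Y = np.hstack([S_next, R[:, None]])
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def step(self, s, a):
        y = np.concatenate([s, a]) @ self.W
        return y[:-1], y[-1]                             # (next state, reward)

# Synthetic stand-in for the historical offloading log (state = queue/load
# features, action = fraction of workload offloaded to the edge server).
N, s_dim, a_dim = 2000, 4, 1
S = rng.normal(size=(N, s_dim))
A = rng.uniform(0, 1, size=(N, a_dim))
S_next = 0.9 * S + 0.1 * A @ rng.normal(size=(a_dim, s_dim))
R = -np.abs(S[:, 0] - A[:, 0])                           # toy latency penalty

model = OfflineDynamicsModel(s_dim, a_dim)
model.fit(S, A, S_next, R)

# --- 2. Policy improvement with short, cost-free model rollouts ------------
theta = np.zeros(s_dim)          # linear-Gaussian offloading policy parameters
H, sigma, lr = 3, 0.1, 0.05      # H kept small so simulated trajectories stay accurate

for it in range(200):
    s = S[rng.integers(N)]                        # branch rollouts from dataset states
    grad, ret = np.zeros_like(theta), 0.0
    for t in range(H):
        mean = float(np.clip(theta @ s, 0.0, 1.0))
        a = np.array([np.clip(rng.normal(mean, sigma), 0.0, 1.0)])
        s_next, r = model.step(s, a)
        grad += (a[0] - mean) / sigma**2 * s      # REINFORCE-style log-prob gradient
        ret += r
        s = s_next
    theta += lr * ret * grad                      # crude policy-gradient update

The sketch omits the paper's mean-field reduction of neighboring users to a single opposing agent; in that setting the dynamics model would additionally condition on the aggregate (mean-field) offloading action of the neighborhood.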
Pages: 939-952
Number of pages: 14
Related Papers
50 records in total
  • [21] Dynamic task offloading for Internet of Things in mobile edge computing via deep reinforcement learning
    Chen, Ying
    Gu, Wei
    Li, Kaixin
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2022
  • [22] Privacy-preserving task offloading in mobile edge computing: A deep reinforcement learning approach
    Xia, Fanglue
    Chen, Ying
    Huang, Jiwei
    SOFTWARE-PRACTICE & EXPERIENCE, 2024, 54 (09) : 1774 - 1792
  • [23] Dependency-Aware Dynamic Task Offloading Based on Deep Reinforcement Learning in Mobile-Edge Computing
    Fang, Juan
    Qu, Dezheng
    Chen, Huijie
    Liu, Yaqi
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (02): : 1403 - 1415
  • [24] Learning-Based Task Offloading for Mobile Edge Computing
    Garaali, Rim
    Chaieb, Cirine
    Ajib, Wessam
    Afif, Meriem
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 1659 - 1664
  • [25] Deep Reinforcement Learning for Task Offloading in Edge Computing
    Xie, Bo
    Cui, Haixia
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 250 - 254
  • [26] Multi-Agent Deep Reinforcement Learning for Task Offloading in UAV-Assisted Mobile Edge Computing
    Zhao, Nan
    Ye, Zhiyang
    Pei, Yiyang
    Liang, Ying-Chang
    Niyato, Dusit
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (09) : 6949 - 6960
  • [27] Mobile-Aware Online Task Offloading Based on Deep Reinforcement Learning in Mobile Edge Computing Networks
    Li, Yuting
    Liu, Yitong
    Liu, Xingcheng
    Tu, Qiang
    Xie, Yi
    2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023
  • [28] Deep reinforcement learning-based dynamical task offloading for mobile edge computing
    Xie, Bo
    Cui, Haixia
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)
  • [29] ARLO: An asynchronous update reinforcement learning-based offloading algorithm for mobile edge computing
    Liu, Zhibin
    Liu, Yuhan
    Lei, Yuxia
    Zhou, Zhenyou
    Wang, Xinshui
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2023, 16 (03) : 1468 - 1480