A bandwidth-fair migration-enabled task offloading for vehicular edge computing: a deep reinforcement learning approach

Cited by: 0
Authors
Tang, Chaogang [1 ]
Li, Zhao [1 ]
Xiao, Shuo [1 ]
Wu, Huaming [2 ]
Chen, Wei [1 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Jiangsu, Peoples R China
[2] Tianjin Univ, Ctr Appl Math, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Vehicular edge computing; Bandwidth fairness; Task offloading; Task migration; Deep reinforcement learning; OPTIMIZATION;
DOI
10.1007/s42486-024-00156-x
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Vehicular edge computing (VEC), which extends the computing, storage, and networking resources of the cloud center to the logical network edge by deploying edge servers at road-side units (RSUs), has attracted extensive attention in recent years owing to its advantages in meeting the stringent latency requirements of vehicular applications. VEC enables tasks and data to be processed and analyzed in close proximity to the data sources (i.e., vehicles). It not only reduces the response latency of vehicular tasks but also mitigates the burden on the backhaul networks. However, achieving cost-effective task offloading in VEC remains challenging, because the computing capabilities of edge servers are limited compared to the cloud center and computing resources are unevenly distributed among RSUs. In this paper, we consider an urban VEC scenario and model the VEC system in terms of delay and cost. The goal is to minimize the weighted sum of total latency and vehicle cost by balancing bandwidth and migrating tasks while satisfying multiple constraints. Specifically, we model the task offloading problem as a weighted bipartite graph matching problem and propose a Kuhn-Munkres (KM) based Task Matching Offloading scheme (KTMO) to determine the optimal offloading strategy. Furthermore, considering the dynamic, time-varying nature of the VEC environment, we model the task migration problem as a Markov Decision Process (MDP) and propose a Deep Reinforcement Learning (DRL) based online learning method to explore optimal migration decisions. Experimental results demonstrate that our strategy outperforms other methods.
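The KTMO step described in the abstract reduces to a minimum-weight bipartite matching between vehicular tasks and RSU-side edge servers. The record contains no code, so the snippet below is only a minimal sketch of that idea in Python, using SciPy's linear_sum_assignment (a Hungarian/Kuhn-Munkres solver). All quantities (task sizes, CPU frequencies, fair-share uplink rates, the weights alpha and beta, and the per-RSU slot count) are illustrative assumptions, not the authors' actual delay and cost model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical problem instance; the paper's delay/cost model is more detailed.
rng = np.random.default_rng(0)
num_tasks, num_rsus = 5, 3                            # tasks (vehicles) and candidate RSUs
task_cycles = rng.uniform(1e8, 1e9, num_tasks)        # required CPU cycles per task
task_bits = rng.uniform(1e5, 1e6, num_tasks)          # input data size per task (bits)
rsu_cpu = rng.uniform(5e9, 2e10, num_rsus)            # RSU CPU frequency (Hz)
rate = rng.uniform(1e6, 1e7, (num_tasks, num_rsus))   # assumed fair-share uplink rate (bit/s)
price = rng.uniform(0.1, 1.0, num_rsus)               # assumed monetary cost per RSU
alpha, beta = 0.7, 0.3                                # latency/cost weights (assumed)

# Weighted objective for each (task, RSU) pair: transmission delay plus computation
# delay, plus a price term, mirroring the "weighted total latency and vehicle cost" goal.
delay = task_bits[:, None] / rate + task_cycles[:, None] / rsu_cpu[None, :]
cost = alpha * delay + beta * price[None, :]

# Each RSU may serve several tasks; replicate its column once per "slot" so that the
# assignment stays one-to-one, as the Kuhn-Munkres formulation requires.
slots_per_rsu = 2
cost_expanded = np.repeat(cost, slots_per_rsu, axis=1)

rows, cols = linear_sum_assignment(cost_expanded)     # Hungarian / KM algorithm
for t, c in zip(rows, cols):
    print(f"task {t} -> RSU {c // slots_per_rsu}, weighted cost {cost_expanded[t, c]:.3f}")
```

The migration stage is handled separately in the paper as an MDP solved with DRL; the matching above only illustrates the static offloading decision.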
Pages: 255-270
Number of pages: 16
Related papers
50 records in total
  • [1] Deep reinforcement learning approach for multi-hop task offloading in vehicular edge computing
    Ahmed, Manzoor
    Raza, Salman
    Ahmad, Haseeb
    Khan, Wali Ullah
    Xu, Fang
    Rabie, Khaled
    ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, 2024, 59
  • [2] Task Offloading With Service Migration for Satellite Edge Computing: A Deep Reinforcement Learning Approach
    Wu, Haonan
    Yang, Xiumei
    Bu, Zhiyong
    IEEE ACCESS, 2024, 12 : 25844 - 25856
  • [3] Task offloading in vehicular edge computing networks via deep reinforcement learning
    Karimi, Elham
    Chen, Yuanzhu
    Akbari, Behzad
    COMPUTER COMMUNICATIONS, 2022, 189 : 193 - 204
  • [4] Prioritized Task Offloading in Vehicular Edge Computing Using Deep Reinforcement Learning
    Uddin, Ashab
    Sakr, Ahmed Hamdi
    Zhang, Ning
    2024 IEEE 99TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2024-SPRING, 2024,
  • [5] Task offloading for vehicular edge computing with imperfect CSI: A deep reinforcement approach
    Wu, Yuxin
    Xia, Junjuan
    Gao, Chongzhi
    Ou, Jiangtao
    Fan, Chengyuan
    Ou, Jianghong
    Fan, Dahua
    PHYSICAL COMMUNICATION, 2022, 55
  • [6] Online Learning Enabled Task Offloading for Vehicular Edge Computing
    Zhang, Rui
    Cheng, Peng
    Chen, Zhuo
    Liu, Sige
    Li, Yonghui
    Vucetic, Branka
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2020, 9 (07) : 928 - 932
  • [7] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing
    Gu, Anqi
    Wu, Huaming
    Tang, Huijun
    Tang, Chaogang
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 2200 - 2205
  • [8] Adaptive Prioritization and Task Offloading in Vehicular Edge Computing Through Deep Reinforcement Learning
    Uddin, Ashab
    Sakr, Ahmed Hamdi
    Zhang, Ning
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (03) : 5038 - 5052
  • [9] Deep Reinforcement Learning for Task Offloading in Edge Computing
    Xie, Bo
    Cui, Haixia
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 250 - 254
  • [10] Joint Task Offloading and Service Migration in RIS assisted Vehicular Edge Computing Network Based on Deep Reinforcement Learning
    Ning, Xiangrui
    Zeng, Ming
    Fei, Zesong
    2024 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2024, : 1037 - 1042