Adaptive Task Offloading in Vehicular Edge Computing Networks: a Reinforcement Learning Based Scheme

Cited by: 26
Authors
Zhang, Jie [1 ]
Guo, Hongzhi [2 ]
Liu, Jiajia [2 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, Xian 710071, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Cybersecur, Xian 710072, Shaanxi, Peoples R China
Source
MOBILE NETWORKS & APPLICATIONS | 2020, Vol. 25, No. 05
Funding
National Natural Science Foundation of China
Keywords
Vehicular networks; Mobile edge computing; Reinforcement learning; RESOURCE-ALLOCATION;
DOI
10.1007/s11036-020-01584-6
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In recent years, with the rapid development of the Internet of Things (IoT) and artificial intelligence, vehicular networks have evolved from simple interactive systems into smart integrated networks. Intelligent connected vehicles (ICVs) can communicate with each other and connect to the urban traffic information network to support intelligent applications, e.g., autonomous driving, intelligent navigation, and in-vehicle entertainment services. These applications are usually delay-sensitive and compute-intensive, so the onboard computation resources of vehicles cannot meet their quality-of-service requirements. To solve this problem, vehicular edge computing networks (VECNs), which leverage mobile edge computing offloading technology, are seen as a promising paradigm. However, existing task offloading schemes do not account for the highly dynamic nature of vehicular networks, and thus cannot produce time-varying offloading decisions as network conditions change. Meanwhile, commonly used mobility models do not truly reflect actual road traffic conditions. Toward this end, we study the task offloading problem in VECNs under the synchronized random walk mobility model. We then propose a reinforcement learning-based scheme as our solution and verify its superior performance in reducing processing delay and adapting to dynamic scenes.
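To make the flavor of such a scheme concrete, the following is a minimal illustrative sketch of reinforcement learning-based offloading, not the paper's actual algorithm: a Q-learning agent chooses between local execution and edge offloading under a randomly changing network state. All states, delay values, and hyperparameters here are hypothetical assumptions for illustration only.

```python
import random

# Hypothetical edge-server load states and offloading actions (not from the paper).
STATES = ["light_load", "heavy_load"]
ACTIONS = ["local", "offload"]

# Assumed processing delays in seconds per (state, action) pair:
# offloading pays off only when the edge server is lightly loaded.
DELAY = {
    ("light_load", "local"): 0.8, ("light_load", "offload"): 0.3,
    ("heavy_load", "local"): 0.8, ("heavy_load", "offload"): 1.2,
}

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over the toy offloading problem above."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = -DELAY[(state, action)]        # lower delay => higher reward
        next_state = rng.choice(STATES)         # network state drifts randomly
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = train()
# The greedy policy should learn to offload under light load and stay local otherwise.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

The key property this toy captures is the paper's motivation: because the decision is a learned state-dependent policy rather than a fixed rule, it adapts as the (here, randomly drifting) network state changes.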
Pages: 1736-1745 (10 pages)
Related Papers (50 total)
  • [31] Deep Reinforcement Learning for Task Offloading in Edge Computing
    Xie, Bo
    Cui, Haixia
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024, : 250 - 254
  • [32] Trusted and Efficient Task Offloading in Vehicular Edge Computing Networks
    Guo, Hongzhi
    Chen, Xiangshen
    Zhou, Xiaoyi
    Liu, Jiajia
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (06) : 2370 - 2382
  • [33] Efficient Task Offloading for Mobile Edge Computing in Vehicular Networks
    Han, Xiao
    Wang, Huiqiang
    Yang, Guoliang
    Wang, Chengbo
    INTERNATIONAL JOURNAL OF DIGITAL CRIME AND FORENSICS, 2024, 16 (01)
  • [34] Efficient and Trusted Task Offloading in Vehicular Edge Computing Networks
    Chen, Xiangshen
    Guo, Hongzhi
    Liu, Jiajia
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 5201 - 5206
  • [35] Deep reinforcement learning based offloading decision algorithm for vehicular edge computing
    Hu, Xi
    Huang, Yang
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [36] Stackelberg game-based task offloading in vehicular edge computing networks
    Liu, Shuang
    Tian, Jie
    Deng, Xiaofang
    Zhi, Yuan
    Bian, Ji
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2021, 34 (16)
  • [37] Deep reinforcement learning based offloading decision algorithm for vehicular edge computing
    Hu, Xi
    Huang, Yang
    PEERJ, 2022, 10
  • [38] Deep Reinforcement Learning-Based Computation Offloading in Vehicular Edge Computing
    Zhan, Wenhan
    Luo, Chunbo
    Wang, Jin
    Min, Geyong
    Duan, Hancong
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [39] Dependent Task Offloading for Edge Computing based on Deep Reinforcement Learning
    Wang, Jin
    Hu, Jia
    Min, Geyong
    Zhan, Wenhan
    Zomaya, Albert Y.
    Georgalas, Nektarios
    IEEE TRANSACTIONS ON COMPUTERS, 2022, 71 (10) : 2449 - 2461