Task offloading strategy and scheduling optimization for internet of vehicles based on deep reinforcement learning

Cited by: 10
Authors
Zhao, Xu [1 ]
Liu, Mingzhen [2 ]
Li, Maozhen [3 ]
Affiliations
[1] Xian Polytech Univ, Sch Elect & Informat, Xian 710048, Peoples R China
[2] Xian Polytech Univ, Sch Comp Sci, Xian 710048, Peoples R China
[3] Brunel Univ London, Dept Elect & Elect Engn, Uxbridge UB8 3PH, England
Funding
National Natural Science Foundation of China;
Keywords
Deep reinforcement learning; Internet of vehicles; Mobile edge computing; Scheduling optimization;
DOI
10.1016/j.adhoc.2023.103193
CLC classification
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Driven by the construction of smart cities, network and communication technologies are gradually permeating Internet of Things (IoT) applications in urban infrastructure, such as autonomous driving. In the Internet of Vehicles (IoV) environment, intelligent vehicles generate large volumes of data, yet the limited computing power of in-vehicle terminals cannot meet the processing demand. To address this problem, we first model task offloading for vehicle terminals in a Mobile Edge Computing (MEC) environment. Second, based on this model, we design and implement an MEC server collaboration scheme that accounts for both delay and energy consumption. Third, drawing on optimization theory, we formulate the system optimization problem with the goal of minimizing the overall system cost. Because the resulting problem is a mixed binary nonlinear program, we recast it as a Markov Decision Process (MDP), turning the original resource allocation decision into a Reinforcement Learning (RL) problem, and apply Deep Reinforcement Learning (DRL) to approach the optimal solution. Finally, we propose a Deep Deterministic Policy Gradient (DDPG) algorithm to handle task offloading and scheduling optimization in a high-dimensional continuous action space, using an experience replay mechanism to accelerate convergence and enhance the stability of the network. Simulation results show that our scheme performs well in terms of convergence, system delay, average task energy consumption, and system cost. For example, compared with the baseline algorithms, the system cost is improved by 9.12% across different task sizes, indicating that our scheme is better suited to the highly dynamic Internet of Vehicles environment.
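The abstract's core loop (an MDP state drawn from the vehicle/MEC system, a continuous offloading-and-scheduling action, a reward tied to a weighted delay-energy system cost, and DDPG with experience replay) can be sketched compactly. The snippet below is a minimal PyTorch illustration under assumed state/action dimensions, network sizes, and an assumed reward definition; it is not the paper's implementation.

```python
# Minimal DDPG sketch for the kind of offloading/scheduling problem the abstract
# describes. Every dimension, network size, and the reward definition below is an
# illustrative assumption, not the authors' code.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 8    # assumed state: task size, channel gain, MEC queue lengths, ...
ACTION_DIM = 2   # assumed continuous action: offloading ratio, transmit-power fraction
GAMMA, TAU, BATCH = 0.99, 0.005, 64

class Actor(nn.Module):
    """Deterministic policy mu(s): maps a state to a continuous action in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, ACTION_DIM), nn.Sigmoid())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q(s, a): estimates the discounted return, e.g. the negative system cost."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=1))

actor, actor_targ = Actor(), Actor()
critic, critic_targ = Critic(), Critic()
actor_targ.load_state_dict(actor.state_dict())
critic_targ.load_state_dict(critic.state_dict())
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
replay = deque(maxlen=100_000)  # experience replay buffer of (s, a, r, s_next) tensors

def soft_update(target, online):
    """Polyak averaging of target-network parameters for training stability."""
    for p_t, p in zip(target.parameters(), online.parameters()):
        p_t.data.mul_(1.0 - TAU).add_(TAU * p.data)

def train_step():
    """One DDPG update from a random minibatch of replayed transitions."""
    if len(replay) < BATCH:
        return
    s, a, r, s_next = map(torch.stack, zip(*random.sample(replay, BATCH)))
    with torch.no_grad():  # TD target computed from the target networks
        y = r.unsqueeze(1) + GAMMA * critic_targ(s_next, actor_targ(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    actor_loss = -critic(s, actor(s)).mean()  # deterministic policy gradient
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    soft_update(actor_targ, actor); soft_update(critic_targ, critic)

# Interaction loop (sketch): act with exploration noise, store the transition, learn.
# a = (actor(s) + 0.1 * torch.randn(ACTION_DIM)).clamp(0, 1)
# r = -(w_delay * task_delay + w_energy * task_energy)  # assumed weighted system cost
# replay.append((s, a.detach(), torch.tensor(r), s_next)); train_step()
```

The sigmoid output head keeps the continuous action bounded, while the replay buffer and the Polyak-averaged target networks are the standard DDPG stabilizers that the abstract's experience-replay remark refers to.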
Pages: 13