Deep Reinforcement Learning for Shared Offloading Strategy in Vehicle Edge Computing

Cited by: 29
Authors
Peng, Xin [1 ]
Han, Zhengke [1 ]
Xie, Wenwu [1 ]
Yu, Chao [1 ]
Zhu, Peng [1 ]
Xiao, Jian [1 ]
Yang, Jinxia [1 ]
Affiliations
[1] Hunan Inst Sci & Technol, Sch Informat Sci & Engn, Yueyang 414015, Peoples R China
Source
IEEE SYSTEMS JOURNAL | 2023, Vol. 17, Issue 02
Keywords
Task analysis; Servers; Optimization; Edge computing; Computational modeling; Reinforcement learning; Delays; Deep reinforcement learning (DRL); Internet of Vehicles (IoVs); task shared offloading; vehicular edge computing (VEC); RESOURCE-ALLOCATION; VEHICULAR NETWORKS; SCHEME;
DOI
10.1109/JSYST.2022.3190926
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
Vehicular edge computing (VEC) effectively reduces the computing load of vehicles by offloading computing tasks from vehicle terminals to edge servers. However, offloading tasks in large numbers increases the transmission time and energy consumption of the network. To reduce the computing load of edge servers and improve system response, a shared offloading strategy based on deep reinforcement learning is proposed for the complex computing environment of the Internet of Vehicles (IoVs). The shared offloading strategy exploits the commonality of vehicle task requests: similar computing tasks from different vehicles can share the computing results of previously submitted tasks. The shared offloading strategy can be adapted to the complex scenarios of the IoVs. Each vehicle can share the offloading conditions of the VEC servers and then adaptively select among three computing modes: local execution, task offloading, and shared offloading. In this article, the network state and the offloading strategy space are the inputs of the deep reinforcement learning (DRL) model. Through DRL, each task unit selects the offloading strategy with the optimal energy consumption at each time period in the dynamic IoV transmission and computing environment. Compared with existing proposals and DRL-based algorithms, the proposed strategy effectively reduces the delay and energy consumption required for task offloading.
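The mode-selection idea described in the abstract (each task unit learning to pick among local execution, task offloading, and shared offloading so as to minimize energy) can be sketched with a toy tabular learner. Everything below is an illustrative assumption, not the paper's actual model: the state encoding, the energy-cost numbers, and the hyperparameters are invented for the sketch, and a simple tabular update stands in for the paper's DRL network.

```python
import random

# Hypothetical decision space: each task unit picks one of three modes.
ACTIONS = ["local", "offload", "shared"]

def energy_cost(state, action):
    """Toy energy model (assumed): shared offloading is cheapest when a
    cached result already exists on the VEC server; otherwise plain
    offloading or local execution is preferable."""
    cached, load = state  # (cached result available?, server load 0-2)
    if action == "shared":
        return 1.0 if cached else 6.0   # reuse result vs. full recompute
    if action == "offload":
        return 3.0 + load               # transmission + queuing energy
    return 5.0                          # local CPU energy

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # (state, action) -> estimated energy; one-shot tasks, no bootstrap
    for _ in range(episodes):
        state = (rng.random() < 0.5, rng.randrange(3))
        if rng.random() < eps:                       # epsilon-greedy explore
            a = rng.choice(ACTIONS)
        else:                                        # greedy: lowest energy
            a = min(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
        # move estimate toward the observed energy cost
        Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
            energy_cost(state, a) - Q.get((state, a), 0.0))
    return Q

Q = train()
best = lambda s: min(ACTIONS, key=lambda a: Q.get((s, a), float("inf")))
print(best((True, 2)))   # cached result available -> shared offloading
print(best((False, 0)))  # no cache, idle server -> task offloading
```

The zero-initialized table combined with strictly positive energy costs makes every action get tried at least once, so the learned policy matches the cost ordering of the toy model.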
Pages: 2089-2100
Page count: 12