Deep Reinforcement Learning for Shared Offloading Strategy in Vehicle Edge Computing

Cited by: 29
Authors
Peng, Xin [1 ]
Han, Zhengke [1 ]
Xie, Wenwu [1 ]
Yu, Chao [1 ]
Zhu, Peng [1 ]
Xiao, Jian [1 ]
Yang, Jinxia [1 ]
Affiliations
[1] Hunan Inst Sci & Technol, Sch Informat Sci & Engn, Yueyang 414015, Peoples R China
Source
IEEE SYSTEMS JOURNAL | 2023, Vol. 17, No. 2
Keywords
Task analysis; Servers; Optimization; Edge computing; Computational modeling; Reinforcement learning; Delays; Deep reinforcement learning (DRL); Internet of Vehicles (IoVs); task shared offloading; vehicular edge computing (VEC); resource allocation; vehicular networks; scheme
DOI
10.1109/JSYST.2022.3190926
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Vehicular edge computing (VEC) effectively reduces the computing load of vehicles by offloading computing tasks from vehicle terminals to edge servers. However, as the number of offloaded tasks grows, so do the transmission time and energy consumption of the network. To reduce the computing load of edge servers and improve system response, a shared offloading strategy based on deep reinforcement learning is proposed for the complex computing environment of the Internet of Vehicles (IoVs). The shared offloading strategy exploits the commonality of vehicle task requests: similar computing tasks from different vehicles can reuse the computing results of previously submitted tasks. The strategy adapts to the complex scenarios of the IoVs: each vehicle can observe the offloading conditions of the VEC servers and then adaptively select among three computing modes: local execution, task offloading, and shared offloading. In this article, the network state and the offloading strategy space are the inputs of the deep reinforcement learning (DRL) agent. Through the DRL, each task unit selects the offloading strategy with the lowest energy consumption in each time period of the dynamic IoVs transmission and computing environment. Compared with existing proposals and DRL-based algorithms, the proposed strategy effectively reduces the delay and energy consumption required for task offloading.
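A minimal sketch of the decision loop the abstract describes, assuming a DQN-style agent (the record does not include code, so the state features, network sizes, epsilon-greedy policy, and reward weighting below are illustrative assumptions, not the authors' implementation): the agent maps the observed network state to one of the three computing modes, and the reward penalizes the weighted delay and energy cost of the chosen mode.

import random
import torch
import torch.nn as nn

# The three computing modes named in the abstract.
LOCAL, OFFLOAD, SHARED = 0, 1, 2

class QNet(nn.Module):
    """Q-network: observed network/task state -> one Q-value per offloading mode."""
    def __init__(self, state_dim: int = 6, n_actions: int = 3):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.layers(state)

def select_mode(qnet: QNet, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice among local / offload / shared execution."""
    if random.random() < epsilon:
        return random.randrange(3)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

def step_reward(delay: float, energy: float, w: float = 0.5) -> float:
    """Negative weighted cost, so maximizing reward minimizes delay and energy."""
    return -(w * delay + (1.0 - w) * energy)

# Example: pick a mode for one task unit from a placeholder 6-feature state
# (e.g., channel quality, server queue length, task size -- all hypothetical).
qnet = QNet()
state = torch.randn(6)
mode = select_mode(qnet, state, epsilon=0.1)

In a full training loop this agent would be updated from (state, mode, reward, next state) transitions; the sketch only shows the action-selection step that chooses among the three modes.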
Pages: 2089-2100
Page count: 12