Deep Reinforcement Learning-Based URLLC-Aware Task Offloading in Collaborative Vehicular Networks

Times Cited: 0
Authors
Pan, Chao [1 ,2 ]
Wang, Zhao [1 ,2 ]
Zhou, Zhenyu [1 ,2 ]
Ren, Xincheng [2 ]
Affiliations
[1] North China Elect Power Univ, Hebei Key Lab Power Internet Things Technol, Beijing 102206, Peoples R China
[2] Yanan Univ, Shaanxi Key Lab Intelligent Proc Big Energy Data, Yanan 716000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
collaborative vehicular networks; task offloading; URLLC awareness; deep Q-learning; RESOURCE-ALLOCATION; EDGE; MANAGEMENT;
DOI
Not available
Chinese Library Classification
TN [Electronic Technology; Communication Technology];
Discipline Classification Code
0809;
Abstract
Collaborative vehicular networking is a key enabler for meeting stringent ultra-reliable and low-latency communication (URLLC) requirements. A user vehicle (UV) dynamically optimizes task offloading by exploiting its collaborations with edge servers and vehicular fog servers (VFSs). However, optimizing task offloading in highly dynamic collaborative vehicular networks faces several challenges, such as guaranteeing URLLC, coping with incomplete information, and overcoming the curse of dimensionality. In this paper, we first characterize URLLC in terms of queuing delay bound violation and the high-order statistics of excess backlogs. Then, a Deep Reinforcement lEarning-based URLLC-Aware task offloading algorithM named DREAM is proposed to maximize the throughput of the UVs while satisfying the URLLC constraints in a best-effort manner. Compared with existing task offloading algorithms, DREAM achieves superior performance in terms of throughput, queuing delay, and URLLC.
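To make the abstract's idea concrete, the sketch below shows a generic deep Q-learning agent that chooses among local execution, edge-server offloading, and VFS offloading, with a throughput reward penalized when an assumed queuing-delay bound is violated. This is an illustrative assumption-laden sketch, not the paper's DREAM algorithm: the state layout, reward shape, delay bound D_MAX, and all hyperparameters are hypothetical.

```python
# Minimal deep Q-learning offloading sketch (illustrative only, NOT DREAM).
# Assumed state: [queue backlog, channel gain to edge, channel gain to VFS, task size].
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_ACTIONS = 3       # 0: local execution, 1: offload to edge server, 2: offload to VFS
STATE_DIM = 4       # assumed state dimension
D_MAX = 0.01        # assumed queuing-delay bound (s) for the URLLC-style penalty
GAMMA, LR, EPS = 0.95, 1e-3, 0.1


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to per-action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=LR)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s_next)


def select_action(state):
    """Epsilon-greedy selection over {local, edge, VFS}."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())


def reward(throughput, queuing_delay):
    """Throughput reward, penalized when the assumed delay bound is exceeded."""
    return throughput - 10.0 * max(0.0, queuing_delay - D_MAX)


def train_step(batch_size=64):
    """One DQN update from uniformly sampled replay transitions."""
    if len(replay) < batch_size:
        return
    s, a, r, s_next = zip(*random.sample(replay, batch_size))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s_next = torch.tensor(s_next, dtype=torch.float32)
    q = q_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        q_target = r + GAMMA * target_net(s_next).max(1).values
    loss = nn.functional.mse_loss(q, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the target network would be periodically synchronized with the online network and the penalty weight tuned to trade throughput against delay-bound violations; these choices are assumptions of the sketch rather than details reported in the record.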
Pages: 134-146
Number of Pages: 13