RAVEN: Resource Allocation Using Reinforcement Learning for Vehicular Edge Computing Networks

Cited by: 3
Authors
Zhang, Yanhao [1]
Abhishek, Nalam Venkata [2]
Gurusamy, Mohan [1]
Affiliations
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 569830, Singapore
[2] Singapore Inst Technol, Infocomm Technol Cluster, Singapore 567739, Singapore
Keywords
Servers; Switches; Resource management; Task analysis; Markov processes; Reinforcement learning; Delays; Resource allocation; Markov decision process; reinforcement learning; vehicular edge computing;
DOI
10.1109/LCOMM.2022.3196711
Chinese Library Classification (CLC)
TN [Electronic technology; communication technology];
Subject classification code
0809 ;
Abstract
Vehicular Edge Computing (VEC) enables vehicles to offload tasks to roadside units (RSUs) to improve task performance and user experience. However, blindly offloading a vehicle's tasks is not necessarily an efficient solution: such a scheme may overload the resources available at the RSU, increase the number of rejected requests, and decrease the system utility by engaging more servers than required. This letter proposes a Markov Decision Process (MDP) based Reinforcement Learning (RL) method to allocate resources at the RSU. The RL algorithm trains the RSU to optimize its resource allocation by adapting the allocation scheme to the total task demand generated by vehicular traffic. The results demonstrate the effectiveness of the proposed method.
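This record does not include the letter's implementation details, so the following is only a minimal sketch of the general idea: a tabular Q-learning agent that decides how many servers the RSU engages for each discretized level of total task demand. The state/action spaces, reward shape, and all constants (CAPACITY_PER_SERVER, SERVER_COST, REJECT_PENALTY) are hypothetical illustrations, not the authors' actual MDP formulation.

```python
# Illustrative sketch only: tabular Q-learning for RSU server allocation.
# State = discretized total task demand; action = number of servers engaged;
# reward = served tasks minus server cost and rejection penalty (all assumed).
import random
from collections import defaultdict

DEMAND_LEVELS = 5          # discretized total task demand: 0 (low) .. 4 (high)
MAX_SERVERS = 4            # action: number of servers the RSU activates
CAPACITY_PER_SERVER = 10   # tasks one server can host per slot (assumed)
SERVER_COST = 2.0          # utility penalty per engaged server (assumed)
REJECT_PENALTY = 1.0       # utility penalty per rejected task (assumed)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = defaultdict(float)   # Q[(demand_level, servers)] -> estimated value


def sample_demand(level):
    """Draw a task count consistent with a discrete demand level."""
    return random.randint(level * 10, level * 10 + 9)


def reward(tasks, servers):
    """System utility: served tasks minus server cost and rejection penalty."""
    capacity = servers * CAPACITY_PER_SERVER
    served = min(tasks, capacity)
    rejected = tasks - served
    return served - SERVER_COST * servers - REJECT_PENALTY * rejected


def choose_action(state):
    """Epsilon-greedy selection over server counts 1..MAX_SERVERS."""
    if random.random() < EPSILON:
        return random.randint(1, MAX_SERVERS)
    return max(range(1, MAX_SERVERS + 1), key=lambda a: q_table[(state, a)])


def train(episodes=20_000):
    state = random.randrange(DEMAND_LEVELS)
    for _ in range(episodes):
        action = choose_action(state)
        tasks = sample_demand(state)
        r = reward(tasks, action)
        # Demand evolves with traffic; modeled here as a random next level.
        next_state = random.randrange(DEMAND_LEVELS)
        best_next = max(q_table[(next_state, a)] for a in range(1, MAX_SERVERS + 1))
        q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])
        state = next_state


if __name__ == "__main__":
    train()
    for level in range(DEMAND_LEVELS):
        best = max(range(1, MAX_SERVERS + 1), key=lambda a: q_table[(level, a)])
        print(f"demand level {level}: allocate {best} server(s)")
```

Under these assumptions the learned policy engages few servers at low demand and more servers only when the demand justifies the added server cost, which mirrors the abstract's goal of avoiding both request rejection and over-provisioning.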
Pages: 2636-2640
Page count: 5