RAVEN: Resource Allocation Using Reinforcement Learning for Vehicular Edge Computing Networks

Cited by: 3
Authors
Zhang, Yanhao [1 ]
Abhishek, Nalam Venkata [2 ]
Gurusamy, Mohan [1 ]
Affiliations
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 569830, Singapore
[2] Singapore Inst Technol, Infocomm Technol Cluster, Singapore 567739, Singapore
Keywords
Servers; Switches; Resource management; Task analysis; Markov processes; Reinforcement learning; Delays; Resource allocation; Markov decision process; reinforcement learning; vehicular edge computing
DOI
10.1109/LCOMM.2022.3196711
CLC number (Chinese Library Classification)
TN [Electronic technology; communication technology]
Subject classification code
0809
Abstract
Vehicular Edge Computing (VEC) enables vehicles to offload tasks to roadside units (RSUs) to improve task performance and user experience. However, blindly offloading a vehicle's tasks may not be efficient: such a scheme can overload the resources available at the RSU, increase the number of rejected requests, and decrease the system utility by engaging more servers than required. This letter proposes a Markov Decision Process (MDP) based Reinforcement Learning (RL) method to allocate resources at the RSU. The RL algorithm trains the RSU to optimize its resource allocation by adapting the allocation scheme to the total task demand generated by the traffic. The results demonstrate the effectiveness of the proposed method.
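The core idea of the abstract — an RL agent at the RSU that picks how many servers to engage based on observed task demand, trading served requests against server cost — can be sketched with tabular Q-learning. This is only an illustrative toy, not the letter's actual model: the state space, reward function, capacities, costs, and the i.i.d. traffic stand-in are all assumptions made here for the sketch.

```python
import random

# Toy model (all parameters are illustrative assumptions, not from the letter):
# state  = discretized task-demand level observed at the RSU,
# action = number of edge servers to engage.
N_DEMAND_LEVELS = 5      # demand levels 0..4
MAX_SERVERS = 4          # actions correspond to engaging 1..MAX_SERVERS servers
CAPACITY_PER_SERVER = 2  # demand units one server can absorb
SERVER_COST = 0.5        # utility penalty per engaged server

def reward(demand, servers):
    """Utility: served demand minus cost of engaged servers.
    Unserved demand (rejected requests) earns nothing."""
    served = min(demand, servers * CAPACITY_PER_SERVER)
    return served - SERVER_COST * servers

def train(episodes=20000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q[state][action]: estimated long-run utility of engaging (action+1) servers
    Q = [[0.0] * MAX_SERVERS for _ in range(N_DEMAND_LEVELS)]
    demand = rng.randrange(N_DEMAND_LEVELS)
    for _ in range(episodes):
        if rng.random() < epsilon:                        # explore
            a = rng.randrange(MAX_SERVERS)
        else:                                             # exploit current estimate
            a = max(range(MAX_SERVERS), key=lambda x: Q[demand][x])
        r = reward(demand, a + 1)
        next_demand = rng.randrange(N_DEMAND_LEVELS)      # i.i.d. traffic stand-in
        # Standard Q-learning update
        Q[demand][a] += alpha * (r + gamma * max(Q[next_demand]) - Q[demand][a])
        demand = next_demand
    return Q

if __name__ == "__main__":
    Q = train()
    for s in range(N_DEMAND_LEVELS):
        best = max(range(MAX_SERVERS), key=lambda a: Q[s][a]) + 1
        print(f"demand level {s}: engage {best} server(s)")
```

Under this toy reward, the learned policy engages just enough servers to cover the demand (roughly one server per two demand units), matching the abstract's point that engaging more servers than required lowers system utility.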
Pages: 2636-2640
Page count: 5
References
12 in total
[2] Ding, Huiyi; Leung, Ka-Cheong. Resource Allocation for Low-Latency NOMA-V2X Networks Using Reinforcement Learning. IEEE Conference on Computer Communications Workshops (IEEE INFOCOM WKSHPS 2021), 2021.
[3] Efroni, Y. Proceedings of Machine Learning Research, vol. 80, 2018.
[4] Liang, Le; Ye, Hao; Li, Geoffrey Ye. Toward Intelligent Vehicular Networks: A Machine Learning Framework. IEEE Internet of Things Journal, 2019, 6(1): 124-135.
[5] Lin, Chun-Cheng; Deng, Der-Jiunn; Yao, Chia-Chi. Resource Allocation in Vehicular Cloud Computing Systems With Heterogeneous Vehicles and Roadside Units. IEEE Internet of Things Journal, 2018, 5(5): 3692-3700.
[6] Liu, J. M. Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC 2019), 2019: 1114. DOI: 10.1109/ITNEC.2019.8729331.
[7] Lyu, Feng; Zhu, Hongzi; Cheng, Nan; Zhou, Haibo; Xu, Wenchao; Li, Minglu; Shen, Xuemin. Characterizing Urban Vehicle-to-Vehicle Communications for Reliable Safety Applications. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(6): 2586-2602.
[8] Peng, Haixia; Shen, Xuemin. Deep Reinforcement Learning Based Resource Management for Multi-Access Edge Computing in Vehicular Networks. IEEE Transactions on Network Science and Engineering, 2020, 7(4): 2416-2428.
[9] Shuai, Renhao; Wang, Leiyu; Guo, Shuaishuai; Zhang, Haixia. Adaptive Task Offloading in Vehicular Edge Computing Networks Based on Deep Reinforcement Learning. 2021 IEEE/CIC International Conference on Communications in China (ICCC), 2021: 260-265.
[10] Salahuddin, Mohammad A.; Al-Fuqaha, Ala; Guizani, Mohsen. Reinforcement Learning for Resource Provisioning in the Vehicular Cloud. IEEE Wireless Communications, 2016, 23(4): 128-135.