RAVEN: Resource Allocation Using Reinforcement Learning for Vehicular Edge Computing Networks

Cited by: 2
Authors
Zhang, Yanhao [1 ]
Abhishek, Nalam Venkata [2 ]
Gurusamy, Mohan [1 ]
Affiliations
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 569830, Singapore
[2] Singapore Inst Technol, Infocomm Technol Cluster, Singapore 567739, Singapore
Keywords
Servers; Switches; Resource management; Task analysis; Markov processes; Reinforcement learning; Delays; Resource allocation; Markov decision process; reinforcement learning; vehicular edge computing
DOI: 10.1109/LCOMM.2022.3196711
Chinese Library Classification (CLC): TN [Electronic technology; communication technology]
Discipline code: 0809
Abstract
Vehicular Edge Computing (VEC) enables vehicles to offload tasks to roadside units (RSUs) to improve task performance and user experience. However, blindly offloading a vehicle's tasks is not necessarily efficient: such a scheme may overload the resources available at the RSU, increase the number of rejected requests, and decrease the system utility by engaging more servers than required. This letter proposes a Markov Decision Process (MDP) based Reinforcement Learning (RL) method to allocate resources at the RSU. The RL algorithm trains the RSU to optimize its resource allocation by varying the allocation scheme according to the total task demand generated by the traffic. The results demonstrate the effectiveness of the proposed method.
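The abstract's idea, that an RSU learns how many servers to engage as task demand varies, can be illustrated with a minimal tabular Q-learning sketch. Everything below (the demand discretization, the per-server capacity, and the reward that trades served tasks against rejections and idle servers) is an illustrative assumption, not the formulation used in the letter:

```python
import random

# Illustrative sketch only: states, actions, and reward shape are
# assumptions, not the exact MDP defined in the paper.
DEMAND_LEVELS = 5      # discretized total task demand arriving at the RSU
MAX_SERVERS = 4        # servers the RSU can engage
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: Q[demand_level][servers_engaged]
Q = [[0.0] * (MAX_SERVERS + 1) for _ in range(DEMAND_LEVELS)]

def reward(demand, servers):
    """Toy utility: credit served demand, penalize rejections and idle servers."""
    capacity = servers * 2           # assumption: each server handles 2 demand units
    served = min(demand, capacity)
    rejected = demand - served
    return served - 2.0 * rejected - 0.5 * servers

def choose_action(state):
    """Epsilon-greedy selection over the number of servers to engage."""
    if random.random() < EPS:
        return random.randint(0, MAX_SERVERS)
    return max(range(MAX_SERVERS + 1), key=lambda a: Q[state][a])

def train(episodes=5000):
    random.seed(0)
    state = random.randrange(DEMAND_LEVELS)
    for _ in range(episodes):
        action = choose_action(state)
        r = reward(state, action)
        # demand evolves with traffic; modeled here as an independent draw
        next_state = random.randrange(DEMAND_LEVELS)
        Q[state][action] += ALPHA * (
            r + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state
    # greedy policy: servers to engage at each demand level
    return [max(range(MAX_SERVERS + 1), key=lambda a: Q[s][a])
            for s in range(DEMAND_LEVELS)]

policy = train()
print(policy)
```

Under this toy reward, the learned policy tends to engage more servers as the demand level grows, which mirrors the letter's goal of matching the allocation scheme to total task demand rather than always engaging every server.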
Pages: 2636-2640 (5 pages)