RAVEN: Resource Allocation Using Reinforcement Learning for Vehicular Edge Computing Networks

Cited by: 2
Authors
Zhang, Yanhao [1 ]
Abhishek, Nalam Venkata [2 ]
Gurusamy, Mohan [1 ]
Affiliations
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 569830, Singapore
[2] Singapore Inst Technol, Infocomm Technol Cluster, Singapore 567739, Singapore
Keywords
Servers; Switches; Resource management; Task analysis; Markov processes; Reinforcement learning; Delays; Resource allocation; Markov decision process; reinforcement learning; vehicular edge computing
DOI
10.1109/LCOMM.2022.3196711
Chinese Library Classification (CLC)
TN [Electronic technology, communication technology];
Subject classification code
0809;
Abstract
Vehicular Edge Computing (VEC) enables vehicles to offload tasks to roadside units (RSUs) to improve task performance and user experience. However, blindly offloading a vehicle's tasks may not be efficient: such a scheme can overload the resources available at the RSU, increase the number of rejected requests, and decrease the system utility by engaging more servers than required. This letter proposes a Markov Decision Process based Reinforcement Learning (RL) method to allocate resources at the RSU. The RL algorithm trains the RSU to optimize its resource allocation by varying the allocation scheme according to the total task demand generated by the traffic. The results demonstrate the effectiveness of the proposed method.
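The letter's record gives no implementation details beyond the abstract, but the idea can be illustrated with a minimal tabular Q-learning sketch: the state is a discretized level of the total task demand at the RSU, the action is the number of edge servers the RSU activates, and the reward trades served demand off against server cost and rejected requests. The demand dynamics, reward shape, hyperparameters, and all names below are illustrative assumptions, not the authors' formulation.

```python
# Minimal tabular Q-learning sketch for RSU resource allocation.
# Illustrative only: the demand model, reward, and hyperparameters are assumptions,
# not the method described in the letter.
import random

N_DEMAND_LEVELS = 10      # discretized total task demand (state space)
N_SERVERS_MAX = 8         # action: number of servers the RSU activates
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: Q[state][action]
Q = [[0.0] * (N_SERVERS_MAX + 1) for _ in range(N_DEMAND_LEVELS)]

def reward(demand_level, servers):
    """Hypothetical utility: served demand minus server cost and rejection penalty."""
    capacity = servers * 2                       # assume each server handles 2 demand units
    served = min(demand_level, capacity)
    rejected = demand_level - served
    return served - 0.5 * servers - 2.0 * rejected

def choose_action(state):
    """Epsilon-greedy choice of how many servers to activate."""
    if random.random() < EPSILON:
        return random.randint(0, N_SERVERS_MAX)
    row = Q[state]
    return max(range(len(row)), key=row.__getitem__)

def train(episodes=5000):
    state = random.randrange(N_DEMAND_LEVELS)
    for _ in range(episodes):
        action = choose_action(state)
        r = reward(state, action)
        # Demand evolves as a simple random walk (stand-in for traffic dynamics).
        next_state = min(N_DEMAND_LEVELS - 1, max(0, state + random.choice([-1, 0, 1])))
        # Standard Q-learning update.
        Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

if __name__ == "__main__":
    train()
    for s in range(N_DEMAND_LEVELS):
        best = max(range(N_SERVERS_MAX + 1), key=Q[s].__getitem__)
        print(f"demand level {s}: activate {best} server(s)")
```

Running the sketch prints a greedy server count per demand level, mimicking how a trained RSU policy would scale its allocation with traffic load.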
Pages: 2636-2640
Number of pages: 5
Related papers (50 in total)
  • [31] Joint Secure Offloading and Resource Allocation for Vehicular Edge Computing Network: A Multi-Agent Deep Reinforcement Learning Approach
    Ju, Ying
    Chen, Yuchao
    Cao, Zhiwei
    Liu, Lei
    Pei, Qingqi
    Xiao, Ming
    Ota, Kaoru
    Dong, Mianxiong
    Leung, Victor C. M.
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (05) : 5555 - 5569
  • [32] Reinforcement-Learning- and Belief-Learning-Based Double Auction Mechanism for Edge Computing Resource Allocation
    Li, Quanyi
    Yao, Haipeng
    Mai, Tianle
    Jiang, Chunxiao
    Zhang, Yan
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07): : 5976 - 5985
  • [33] Reinforcement learning based tasks offloading in vehicular edge computing networks
    Cao, Shaohua
    Liu, Di
    Dai, Congcong
    Wang, Chengqi
    Yang, Yansheng
    Zhang, Weishan
    Zheng, Danyang
    COMPUTER NETWORKS, 2023, 234
  • [34] Resource allocation for content distribution in IoT edge cloud computing environments using deep reinforcement learning
    Neelakantan, Puligundla
    Gangappa, Malige
    Rajasekar, Mummalaneni
    Kumar, Talluri Sunil
    Reddy, Gali Suresh
    JOURNAL OF HIGH SPEED NETWORKS, 2024, 30 (03) : 409 - 426
  • [36] A resource allocation strategy for internet of vehicles using reinforcement learning in edge computing environment
    Li, Yihong
    Liu, Zhengli
    Tao, Qi
    SOFT COMPUTING, 2023, 27 (07) : 3999 - 4009
  • [37] Multiagent Deep-Reinforcement-Learning-Based Resource Allocation for Heterogeneous QoS Guarantees for Vehicular Networks
    Tian, Jie
    Liu, Qianqian
    Zhang, Haixia
    Wu, Dalei
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (03): : 1683 - 1695
  • [38] Asynchronous Deep Reinforcement Learning for Collaborative Task Computing and On-Demand Resource Allocation in Vehicular Edge Computing
    Liu L.
    Feng J.
    Mu X.
    Pei Q.
    Lan D.
    Xiao M.
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (12) : 15513 - 15526
  • [39] Intelligence-based Reinforcement Learning for Continuous Dynamic Resource Allocation in Vehicular Networks
    Wang, Yuhang
    He, Ying
    Yu, F. Richard
    Wu, Kaishun
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [40] Adaptive Task Offloading in Vehicular Edge Computing Networks: a Reinforcement Learning Based Scheme
    Zhang, Jie
    Guo, Hongzhi
    Liu, Jiajia
    MOBILE NETWORKS & APPLICATIONS, 2020, 25 (05) : 1736 - 1745