Task Offloading and Resource Allocation in Vehicular Networks: A Lyapunov-Based Deep Reinforcement Learning Approach

Cited by: 37
Authors
Kumar, Anitha Saravana [1 ]
Zhao, Lian [1 ]
Fernando, Xavier [1 ]
Affiliation
[1] Toronto Metropolitan Univ, Dept Elect Comp Biomed Engn, Toronto, ON M5B 2K3, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Lyapunov optimization; multi-agent DDPG; reinforcement learning; resource management; VEC; vehicle edge computing; vehicular networks; COMPUTING NETWORKS; OPTIMIZATION;
DOI
10.1109/TVT.2023.3271613
Chinese Library Classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Subject Classification Codes
0808; 0809;
Abstract
Vehicular Edge Computing (VEC) has gained popularity due to its ability to enhance vehicular networks. VEC servers located at Roadside Units (RSUs) allow low-power vehicles to offload computation-intensive and delay-sensitive applications, making VEC a promising solution. However, optimal resource allocation across edge servers is a complex problem due to vehicle mobility and dynamic data traffic. To address this, we propose a Lyapunov-based Multi-Agent Deep Deterministic Policy Gradient (L-MADDPG) method that jointly optimizes computing-task distribution and radio resource allocation to minimize energy consumption while meeting delay requirements. We evaluate the trade-offs among the performance of the optimization algorithm, the queuing model, and energy consumption. We first develop delay, queue, and energy models for task execution at the vehicle or the RSU, and then present the L-MADDPG algorithm, which jointly solves the task-offloading and resource-allocation problems to reduce energy consumption without compromising performance. Our simulation results show that the proposed algorithm reduces energy consumption while maintaining system performance compared with existing algorithms.
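The Lyapunov side of the abstract's energy-delay trade-off can be illustrated with a toy drift-plus-penalty sketch. Everything below (the two-action local/offload model, the scoring rule, and all numeric parameters such as `V`) is an illustrative assumption for exposition, not the paper's actual formulation or the L-MADDPG algorithm:

```python
import random

# Illustrative drift-plus-penalty sketch (assumed model, not the paper's):
# each slot, choose local vs. RSU execution to minimize
# V * energy_cost - Q * service_rate, where Q is the task queue backlog.
V = 10.0  # Lyapunov trade-off weight: larger V favors energy over queue delay

def slot_decision(arrival_bits, local_energy, offload_energy,
                  local_rate, offload_rate, Q):
    """Pick the per-slot action minimizing the drift-plus-penalty bound."""
    # penalty term charges energy; drift term rewards draining the backlog
    local_score = V * local_energy - Q * local_rate
    offload_score = V * offload_energy - Q * offload_rate
    action = "local" if local_score <= offload_score else "offload"
    service = local_rate if action == "local" else offload_rate
    # standard queue evolution: backlog grows with arrivals, shrinks with service
    Q_next = max(Q + arrival_bits - service, 0.0)
    return action, Q_next

random.seed(0)
Q = 0.0  # initial task queue backlog
for t in range(5):
    arrivals = random.uniform(0.0, 2.0)  # random task arrivals this slot
    action, Q = slot_decision(arrivals, local_energy=1.0, offload_energy=0.4,
                              local_rate=1.5, offload_rate=1.0, Q=Q)
    print(t, action, round(Q, 3))
```

With a cheap offload option and a small backlog, the penalty term dominates and the controller offloads; as `Q` grows, the drift term pushes it toward the faster local execution, which is the stability-versus-energy tension the `V` parameter tunes.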
Pages: 13360-13373
Page count: 14