Task Offloading and Resource Allocation Based on Reinforcement Learning and Load Balancing in Vehicular Networking

Cited: 0
Authors
Tian, Shujuan [1 ,2 ]
Xiang, Shuhuan [1 ,2 ,3 ]
Zhou, Ziqi [1 ,2 ]
Dai, Haipeng [4 ]
Yu, Enze
Deng, Qingyong [5 ,6 ]
Affiliations
[1] Xiangtan Univ, Sch Comp Sci, Sch Cyberspace Secur, Key Lab Hunan Prov Internet Things & Informat Secu, Xiangtan 411105, Peoples R China
[2] Xiangtan Univ, Hunan Int Sci & Technol Cooperat Base Intelligent, Xiangtan 411105, Peoples R China
[3] Xiangtan Univ, Sch Comp Sci, Key Lab Hunan Prov Internet Things & Informat Secu, Hunan Int Sci & Technol Cooperat Base Intelligent, Xiangtan 411105, Peoples R China
[4] Nanjing Univ, Sch Comp Sci & Technol, Nanjing 211189, Peoples R China
[5] Minist Educ, Key Lab Educ Blockchain & Intelligent Technol, Guilin 541004, Peoples R China
[6] Guangxi Normal Univ, Guangxi Key Lab MultiSource Informat Min & Secur, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Resource management; Servers; Heuristic algorithms; Optimization; Load management; Vehicle dynamics; Convergence; Training; Quality of service; Load modeling; Multi-access edge computing; Internet of Vehicles; task offloading; resource allocation; load balancing; reinforcement learning; EDGE; FRAMEWORK; MANAGEMENT;
DOI
10.1109/TCE.2025.3542133
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic and Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
Due to limited on-board resources and the mobility of vehicles in a multi-access edge computing (MEC)-based vehicular network, efficient task offloading and resource allocation schemes are essential for achieving low-latency, low-energy applications in the Internet of Vehicles (IoV). The spatial distribution of vehicles, influenced by various factors, leads to significant workload variations across MEC servers. In this paper, we address task offloading and resource allocation as a joint optimization problem and propose a Load-Balancing Deep Deterministic Policy Gradient (LBDDPG) algorithm to solve it. The joint optimization problem is modeled as a Markov Decision Process (MDP), enabling the LBDDPG algorithm to systematically address the challenges of workload imbalance and resource inefficiency. The algorithm incorporates a load optimization strategy that balances workload distribution across MEC servers, mitigating disparities caused by uneven vehicle distributions. The reward function accounts for both energy consumption and delay, ensuring an optimal trade-off between these critical factors. To improve training efficiency, a noise-based exploration strategy is employed to prevent ineffective exploration in the early stages of training. Additionally, constraints such as computational capacity and latency thresholds are embedded to ensure the algorithm's practical applicability. Experimental results demonstrate that the proposed LBDDPG algorithm converges faster and achieves lower energy consumption and latency than other reinforcement learning algorithms.
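The abstract describes the reward design (a weighted delay-energy cost combined with a load-balancing objective) and the noise-based exploration scheme only qualitatively; the paper's exact formulation is not reproduced here. The following minimal Python sketch is not the authors' implementation: the trade-off weights, the coefficient-of-variation load penalty, the latency-violation penalty, and the annealed Gaussian noise schedule are all assumptions introduced purely for illustration of how such a DDPG-style reward and exploration strategy could be structured.

```python
# Illustrative sketch only (not the paper's code): a DDPG-style reward that
# trades off delay, energy, and MEC-server load imbalance, plus annealed
# Gaussian exploration noise. All constants below are assumed values.
import numpy as np

W_DELAY, W_ENERGY, W_LOAD = 0.5, 0.5, 0.3   # assumed trade-off weights
DELAY_MAX = 0.1                              # assumed latency threshold (seconds)

def reward(delay, energy, server_loads):
    """Negative weighted cost of delay and energy, plus a penalty that grows
    with workload imbalance across MEC servers (here: std/mean of loads)."""
    imbalance = np.std(server_loads) / (np.mean(server_loads) + 1e-8)
    cost = W_DELAY * delay + W_ENERGY * energy + W_LOAD * imbalance
    if delay > DELAY_MAX:                    # latency-constraint violation
        cost += 1.0                          # assumed fixed penalty
    return -cost

def explore(actor_action, episode, sigma0=0.3, decay=0.995, low=0.0, high=1.0):
    """Gaussian exploration noise annealed over episodes, so early training
    explores broadly while later actions stay close to the learned policy."""
    sigma = sigma0 * (decay ** episode)
    noisy = actor_action + np.random.normal(0.0, sigma, size=actor_action.shape)
    return np.clip(noisy, low, high)         # keep offloading/allocation ratios feasible

# Example: offloading ratio and resource-allocation fraction for one vehicle.
a = explore(np.array([0.6, 0.4]), episode=10)
r = reward(delay=0.05, energy=0.8, server_loads=np.array([3.0, 7.0, 5.0]))
```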
Pages: 2217-2230
Page count: 14