Task Offloading and Resource Allocation Based on Reinforcement Learning and Load Balancing in Vehicular Networking

Times Cited: 0
Authors
Tian, Shujuan [1 ,2 ]
Xiang, Shuhuan [1 ,2 ,3 ]
Zhou, Ziqi [1 ,2 ]
Dai, Haipeng [4 ]
Yu, Enze
Deng, Qingyong [5 ,6 ]
Affiliations
[1] Xiangtan Univ, Sch Comp Sci, Sch Cyberspace Secur, Key Lab Hunan Prov Internet Things & Informat Secu, Xiangtan 411105, Peoples R China
[2] Xiangtan Univ, Hunan Int Sci & Technol Cooperat Base Intelligent, Xiangtan 411105, Peoples R China
[3] Xiangtan Univ, Sch Comp Sci, Key Lab Hunan Prov Internet Things & Informat Secu, Hunan Int Sci & Technol Cooperat Base Intelligent, Xiangtan 411105, Peoples R China
[4] Nanjing Univ, Sch Comp Sci & Technol, Nanjing 211189, Peoples R China
[5] Minist Educ, Key Lab Educ Blockchain & Intelligent Technol, Guilin 541004, Peoples R China
[6] Guangxi Normal Univ, Guangxi Key Lab MultiSource Informat Min & Secur, Guilin 541004, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Resource management; Servers; Heuristic algorithms; Optimization; Load management; Vehicle dynamics; Convergence; Training; Quality of service; Load modeling; Multi-access edge computing; Internet of Vehicles; task offloading; resource allocation; load balancing; reinforcement learning; EDGE; FRAMEWORK; MANAGEMENT;
DOI
10.1109/TCE.2025.3542133
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Due to limited on-board resources and the mobility characteristics of vehicles in a multi-access edge computing (MEC)-based vehicular network, efficient task offloading and resource allocation schemes are essential for achieving low-latency and low-energy consumption applications in the Internet of Vehicles (IoV). The spatial distribution of vehicles, influenced by various factors, leads to significant workload variations across MEC servers. In this paper, we address task offloading and resource allocation as a joint optimization problem and propose a Load-Balancing Deep Deterministic Policy Gradient (LBDDPG) algorithm to achieve optimal results. The joint optimization problem is modeled as a Markov Decision Process (MDP), enabling the LBDDPG algorithm to systematically address the challenges of workload imbalance and resource inefficiency. The algorithm incorporates a load optimization strategy to balance workload distribution across MEC servers, mitigating disparities caused by uneven vehicle distributions. The reward function is designed to account for both energy consumption and delay, ensuring an optimal trade-off between these critical factors. To enhance training efficiency, a noise-based exploration strategy is employed, preventing ineffective exploration during the early stages. Additionally, constraints such as computational capacity and latency thresholds are embedded to ensure the algorithm's practical applicability. Experimental results demonstrate that the proposed LBDDPG algorithm achieves faster convergence and superior performance in terms of energy consumption and latency compared to other reinforcement learning algorithms.
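Based only on the description in the abstract, the short Python sketch below illustrates one plausible shape of two ingredients the paper highlights: a reward that trades off delay and energy while penalizing load imbalance across MEC servers, and a Gaussian exploration noise whose scale decays with training to curb ineffective exploration in the early stages. This is not the authors' implementation; every function name, weight, and array shape here is a hypothetical assumption.

# Illustrative sketch (assumed, not the paper's code): reward and exploration
# noise consistent with the abstract's description of LBDDPG.
import numpy as np

def reward(delay, energy, server_loads, w_delay=0.5, w_energy=0.5, w_balance=0.2):
    """Negative weighted cost: lower delay and energy, and a flatter load
    distribution across MEC servers, yield a higher reward."""
    load_imbalance = np.std(server_loads) / (np.mean(server_loads) + 1e-9)
    return -(w_delay * delay + w_energy * energy + w_balance * load_imbalance)

def exploration_noise(action, step, sigma0=0.3, decay=1e-4, low=-1.0, high=1.0):
    """Gaussian action noise whose scale decays with the training step,
    limiting ineffective exploration after the early stages."""
    sigma = sigma0 * np.exp(-decay * step)
    return np.clip(action + np.random.normal(0.0, sigma, size=action.shape), low, high)

if __name__ == "__main__":
    loads = np.array([0.9, 0.2, 0.4])   # hypothetical per-server utilization
    print(reward(delay=0.08, energy=0.5, server_loads=loads))
    print(exploration_noise(np.array([0.1, -0.3]), step=10_000))

The weights and the coefficient-of-variation penalty are placeholders; the paper's actual reward shaping, constraints, and noise schedule are given in the full text.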
Pages: 2217-2230
Number of Pages: 14