Joint Driving Mode Selection and Resource Management in Vehicular Edge Computing Networks

Cited by: 0
Authors
Yang, Chao [1 ,2 ]
Chen, Jihuang [1 ,2 ]
Huang, Xumin [1 ,2 ]
Lian, Jianyu [1 ,2 ]
Tang, Yanqun [3 ]
Chen, Xin [1 ,2 ]
Xie, Shengli [1 ,2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[2] Guangdong Univ Technol, Guangdong Prov Key Lab Intelligent Syst & Optimiza, Guangzhou 510006, Peoples R China
[3] Sun Yat Sen Univ, Sch Elect & Commun Engn, Shenzhen 518107, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
TV; Resource management; Optimization; Roads; Vehicle dynamics; Computational modeling; Dynamic scheduling; Edge computing; Servers; Computational efficiency; Driving mode selection; hierarchical reinforcement learning (HRL); resource management; terminal-server matching;
DOI
10.1109/JIOT.2025.3545747
CLC classification
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Connected and automated vehicles (CAVs) have emerged as an efficient solution to improve the driving experience in intelligent transportation systems (ITSs), in which the targeted vehicle (TV) can switch between the human-driven (HD) and autonomous-driven (AD) modes to act as a server or a terminal in vehicular edge computing networks (VECNs). However, because traffic networks are dynamic and vehicles are mobile, the distribution of computational resources is imbalanced and variable, so designing a cooperative resource management scheme for the whole journey of vehicle users is challenging. In this article, we propose a joint driving mode selection and resource management scheme for the TV in each road segment, to maximize vehicle users' satisfaction over the whole journey. For the resulting complex joint optimization problem, we design a three-stage hierarchical optimization (3SHO) framework, using a deep Q-network (DQN) for driving mode optimization in the first stage and deep deterministic policy gradient (DDPG) for optimizing resource management under the selected driving modes. A terminal-server matching mechanism is further introduced to enable dynamic service-quality improvement for the TV. Specifically, we design a new user-satisfaction function that considers the quality of service, traffic revenue, and the gap between users' expected and actual revenues. Experimental results showcase the robust convergence of the 3SHO algorithm, its adaptability to dynamic traffic networks, and its capacity to significantly enhance user satisfaction.
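The abstract's two-level structure (a discrete mode choice sitting above a continuous resource allocation) can be sketched in miniature. The toy below uses tabular Q-learning in place of the paper's DQN and a simple coordinate search in place of DDPG; the satisfaction function, its coefficients, and the per-segment demand profile are illustrative assumptions, not the paper's formulation.

```python
import random

random.seed(0)

MODES = ("HD", "AD")  # human-driven vs. autonomous-driven, as in the abstract


def satisfaction(mode, alloc, demand):
    """Toy user satisfaction: QoS term minus a revenue-gap penalty.
    Coefficients are illustrative, not taken from the paper."""
    qos = alloc * (1.2 if mode == "AD" else 0.8)  # assume AD exploits edge resources better
    revenue_gap = abs(demand - alloc)             # gap between expected and actual allocation
    return qos - 0.5 * revenue_gap


def best_alloc(mode, demand, steps=50):
    """Inner stage: coordinate search over the continuous allocation in [0, 1],
    standing in for the DDPG actor of the 3SHO framework."""
    alloc, step = 0.5, 0.25
    for _ in range(steps):
        for cand in (alloc - step, alloc + step):
            cand = min(1.0, max(0.0, cand))
            if satisfaction(mode, cand, demand) > satisfaction(mode, alloc, demand):
                alloc = cand
        step *= 0.7
    return alloc


def train_mode_policy(segments, episodes=200, eps=0.2, lr=0.3):
    """Outer stage: tabular Q-learning over the discrete per-segment mode choice,
    standing in for the first-stage DQN."""
    q = {(s, m): 0.0 for s in range(segments) for m in MODES}
    # Hypothetical per-segment computational demand along the journey.
    demands = [0.3 + 0.6 * s / max(1, segments - 1) for s in range(segments)]
    for _ in range(episodes):
        for s in range(segments):
            mode = (random.choice(MODES) if random.random() < eps
                    else max(MODES, key=lambda m: q[(s, m)]))
            reward = satisfaction(mode, best_alloc(mode, demands[s]), demands[s])
            q[(s, mode)] += lr * (reward - q[(s, mode)])
    return q
```

With these toy coefficients the AD mode dominates every segment, so the outer policy converges to AD throughout; swapping in mode-dependent costs would make the per-segment choice nontrivial, which is the regime the paper's DQN stage targets.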
Pages: 20448-20461
Page count: 14
Related papers
45 in total
[21]   A Hierarchical Reinforcement Learning Algorithm Based on Attention Mechanism for UAV Autonomous Navigation [J].
Liu, Zun ;
Cao, Yuanqiang ;
Chen, Jianyong ;
Li, Jianqiang .
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (11) :13309-13320
[22]   Toward Secure and Trustworthy Vehicular Fog Computing: A Survey [J].
Nazih, Ossama ;
Benamar, Nabil ;
Lamaazi, Hanane ;
Choaui, Habiba .
IEEE ACCESS, 2024, 12 :35154-35171
[23]   Hierarchical Reinforcement Learning: A Comprehensive Survey [J].
Pateria, Shubham ;
Subagdja, Budhitama ;
Tan, Ah-Hwee ;
Quek, Chai .
ACM COMPUTING SURVEYS, 2021, 54 (05)
[24]  
Qing G, 2017, CHIN CONT DECIS CONF, P7138, DOI 10.1109/CCDC.2017.7978471
[25]   Enabling Efficient Scheduling in Large-Scale UAV-Assisted Mobile-Edge Computing via Hierarchical Reinforcement Learning [J].
Ren, Tao ;
Niu, Jianwei ;
Dai, Bin ;
Liu, Xuefeng ;
Hu, Zheyuan ;
Xu, Mingliang ;
Guizani, Mohsen .
IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (10) :7095-7109
[26]   Multi-Access Edge Computing-Based Vehicle-Vehicle-RSU Data Offloading Over the Multi-RSU-Overlapped Environment [J].
Lin, Shih-Yang ;
Huang, Chung-Ming ;
Wu, Tzu-Yu .
IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 3 :7-32
[27]   Hierarchical Deep Reinforcement Learning for Joint Service Caching and Computation Offloading in Mobile Edge-Cloud Computing [J].
Sun, Chuan ;
Li, Xiuhua ;
Wang, Chenyang ;
He, Qiang ;
Wang, Xiaofei ;
Leung, Victor C. M. .
IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) :1548-1564
[28]   Joint Service Deployment and Task Scheduling for Satellite Edge Computing: A Two-Timescale Hierarchical Approach [J].
Tang, Qinqin ;
Xie, Renchao ;
Fang, Zeru ;
Huang, Tao ;
Chen, Tianjiao ;
Zhang, Ran ;
Yu, F. Richard .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2024, 42 (05) :1063-1079
[29]   Networking and Communications in Autonomous Driving: A Survey [J].
Wang, Jiadai ;
Liu, Jiajia ;
Kato, Nei .
IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2019, 21 (02) :1243-1274
[30]   Fast Adaptive Task Offloading in Edge Computing Based on Meta Reinforcement Learning [J].
Wang, Jin ;
Hu, Jia ;
Min, Geyong ;
Zomaya, Albert Y. ;
Georgalas, Nektarios .
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (01) :242-253