Intelligent Driving Task Scheduling Service in Vehicle-Edge Collaborative Networks Based on Deep Reinforcement Learning

Cited: 1
Authors
Wang, Nuanlai [1 ]
Pang, Shanchen [1 ]
Ji, Xiaofeng [1 ]
Wang, Min [2 ]
Qiao, Sibo [3 ]
Yu, Shihang [4 ]
Affiliations
[1] China Univ Petr East China, Coll Comp Sci & Technol, Qingdao 266580, Peoples R China
[2] Yangzhou Univ, Coll Informat Engn, Yangzhou 225127, Jiangsu, Peoples R China
[3] Tiangong Univ, Coll Software, Tianjin 300387, Peoples R China
[4] Tiangong Univ, Sch Mech Engn, Tianjin 300387, Peoples R China
Source
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT | 2024, Vol. 21, No. 4
Keywords
Task analysis; Computational modeling; Processor scheduling; Collaboration; 6G mobile communication; Servers; Optimization; 6G Internet of Vehicles; task offloading; edge computing; collaborative computing; deep reinforcement learning;
DOI
10.1109/TNSM.2024.3409557
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With the evolution of 6G technology, mobile edge computing is rapidly advancing as a crucial application scenario. This research presents an innovative method for vehicle-edge task offloading decision-making, leveraging real-time data inputs such as channel conditions, image entropy, and detector confidence levels. We propose a collaborative task processing framework for vehicle-edge computing that effectively combines lightweight and heavyweight models to cater to varying demands, ensuring efficient task execution. Additionally, the study introduces a custom-designed reinforcement learning algorithm aimed explicitly at optimizing offloading scheduling. This algorithm boosts decision-making accuracy and efficiency and features a comprehensive reward system to achieve a balanced trade-off between detection performance and latency. The framework's efficacy is thoroughly evaluated in complex driving scenarios using the SODA10M dataset. Our results indicate the framework's capability to achieve convergence, enhance precision, ensure stability, and maintain a lightweight operation, emphasizing its suitability for real-world implementation. This work provides practical and efficient strategies for intelligent driving task scheduling to meet the requirements of contemporary dynamic environments.
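The decision inputs and reward trade-off described in the abstract can be sketched as follows. This is a minimal illustrative baseline, not the authors' method: all names, thresholds, and weights (TaskState, conf_threshold, channel_threshold, alpha, beta) are assumptions for illustration, and the paper's actual reinforcement learning policy would replace the greedy rule shown here.

```python
from dataclasses import dataclass


@dataclass
class TaskState:
    """Per-frame features named in the abstract (illustrative encoding)."""
    channel_gain: float         # normalized channel quality in [0, 1]
    image_entropy: float        # normalized image entropy in [0, 1]
    detector_confidence: float  # confidence of the lightweight on-vehicle model


def reward(detection_score: float, latency_s: float,
           alpha: float = 1.0, beta: float = 0.5) -> float:
    """Reward trading off detection performance against latency.

    alpha and beta are hypothetical weights; the paper's reward design
    is more comprehensive.
    """
    return alpha * detection_score - beta * latency_s


def offload_decision(s: TaskState, conf_threshold: float = 0.8,
                     channel_threshold: float = 0.3) -> str:
    """Greedy baseline: offload hard frames when the channel supports it."""
    if s.detector_confidence >= conf_threshold:
        return "local"   # lightweight on-vehicle model suffices
    if s.channel_gain < channel_threshold:
        return "local"   # weak channel: offloading latency would dominate
    return "edge"        # send hard frame to the heavyweight edge model


state = TaskState(channel_gain=0.7, image_entropy=0.9, detector_confidence=0.4)
print(offload_decision(state))        # edge
print(round(reward(0.9, 0.2), 3))     # 0.8
```

A learned policy would map the same TaskState features to the local/edge action and be trained against a reward of this shape, rather than using fixed thresholds.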
Pages: 4357-4368 (12 pages)