Deadline-aware task offloading in vehicular networks using deep reinforcement learning

Times Cited: 5
Authors
Farimani, Mina Khoshbazm [1 ]
Karimian-Aliabadi, Soroush [2 ]
Entezari-Maleki, Reza [1 ,3 ,4 ]
Egger, Bernhard [5 ]
Sousa, Leonel [4 ]
Affiliations
[1] Iran Univ Sci & Technol, Sch Comp Engn, Tehran, Iran
[2] Sharif Univ Technol, Dept Comp Engn, Tehran, Iran
[3] Inst Res Fundamental Sci IPM, Sch Comp Sci, Tehran, Iran
[4] Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal
[5] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul, South Korea
Keywords
Computation offloading; Vehicular edge computing; Deep reinforcement learning; Deep Q-learning; Internet of vehicles; RESOURCE-ALLOCATION; EDGE; FRAMEWORK; RADIO;
DOI
10.1016/j.eswa.2024.123622
CLC Classification Number
TP18 (Theory of artificial intelligence);
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Smart vehicles have a rising demand for computation resources, and vehicular edge computing has recently been recognized as an effective solution. Edge servers deployed in roadside units can accomplish tasks beyond the capacity of the hardware embedded in the vehicles. The main challenge, however, is to carefully select which tasks to offload, taking their deadlines into account, so as to reduce energy consumption while delivering good performance. In this paper, we consider a vehicular edge computing network in which multiple vehicles move at non-constant speeds and generate tasks at each time slot. We then propose a task offloading algorithm, aware of each vehicle's direction, based on Rainbow, a deep Q-learning algorithm that combines several independent improvements to the deep Q-network algorithm. This overcomes the limitations of conventional approaches and reaches an optimal offloading policy by effectively incorporating the computation resources of edge servers to jointly minimize average delay and energy consumption. Real-world traffic data is used to evaluate the performance of the proposed approach against other algorithms, in particular deep Q-network, double deep Q-network, and deep recurrent Q-network. Experimental results show average reductions of 18% in energy consumption and 15% in delay when using the proposed Rainbow deep Q-network based algorithm in comparison to the state-of-the-art. Moreover, the stability and convergence of the learning process are significantly improved by adopting the Rainbow algorithm.
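The joint delay-energy trade-off described in the abstract can be illustrated with a toy tabular Q-learning sketch. Note this is only a minimal sketch: the paper itself uses Rainbow, a neural-network-based deep Q-learning variant, and all states, actions, and cost coefficients below are invented for illustration rather than taken from the paper.

```python
import random

# Hypothetical per-slot cost combining delay and energy (lower is better).
# Action 0 = execute locally in the vehicle; action 1 = offload to an edge server.
def cost(action, task_size, edge_load):
    if action == 0:
        return task_size * 1.0               # slow onboard CPU, no transmission
    return task_size * 0.3 + edge_load       # fast edge CPU plus congestion penalty

def train(episodes=2000, eps=0.1, alpha=0.2, seed=0):
    rng = random.Random(seed)
    # State = (task size, current edge-server load), both from a tiny toy grid.
    states = [(s, l) for s in (1, 2) for l in (0, 1)]
    q = {(st, a): 0.0 for st in states for a in (0, 1)}
    for _ in range(episodes):
        st = rng.choice(states)
        # Epsilon-greedy action selection over the two offloading choices.
        if rng.random() < eps:
            a = rng.choice((0, 1))
        else:
            a = min((0, 1), key=lambda x: q[(st, x)])
        # Single-step cost target: move the Q-value toward the observed cost.
        q[(st, a)] += alpha * (cost(a, *st) - q[(st, a)])
    return q

q = train()
# Greedy policy: for each state, pick the action with the lower learned cost.
policy = {st: min((0, 1), key=lambda a: q[(st, a)]) for st, _ in q}
```

Under these toy costs the learned policy offloads whenever the edge server is lightly loaded or the task is large, and falls back to local execution for a small task on a congested edge server, which is the qualitative deadline/energy behavior the proposed algorithm learns with a deep network over a far richer state space.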
Pages: 14
Related Papers
50 records in total
  • [31] Deadline-Aware Offloading for High-Throughput Accelerators
    Yeh, Tsung Tai
    Sinclair, Matthew D.
    Beckmann, Bradford M.
    Rogers, Timothy G.
    2021 27TH IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE (HPCA 2021), 2021, : 479 - 492
  • [32] Meta learning-based deep reinforcement learning algorithm for task offloading in dynamic vehicular network
    Liu, Liang
    Jing, Tengxiang
    Li, Wenwei
    Duan, Jie
    Mao, Wuping
    Liu, Huan
    Liu, Guanyu
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 143
  • [33] Reinforcement-Learning-Based Deadline Constrained Task Offloading Schema for Energy Saving in Vehicular Edge Computing System
    Do Bao Son
    Hiep Khac Vo
    Ta Huu Binh
    Tran Hoang Hai
    Binh Minh Nguyen
    Huynh Thi Thanh Binh
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [34] Dependency-aware online task offloading based on deep reinforcement learning for IoV
    Liu, Chunhong
    Wang, Huaichen
    Zhao, Mengdi
    Liu, Jialei
    Zhao, Xiaoyan
    Yuan, Peiyan
    JOURNAL OF CLOUD COMPUTING-ADVANCES SYSTEMS AND APPLICATIONS, 2024, 13 (01):
  • [35] Deep Reinforcement Learning-Based Task Offloading and Load Balancing for Vehicular Edge Computing
    Wu, Zhoupeng
    Jia, Zongpu
    Pang, Xiaoyan
    Zhao, Shan
    ELECTRONICS, 2024, 13 (08)
  • [36] Deep Learning-Based Task Offloading for Vehicular Edge Computing
    Zeng, Feng
    Liu, Chengsheng
    Tangjiang, Junzhe
    Li, Wenjia
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT III, 2021, 12939 : 291 - 298
  • [37] Vehicle Speed Aware Computing Task Offloading and Resource Allocation Based on Multi-Agent Reinforcement Learning in a Vehicular Edge Computing Network
    Huang, Xinyu
    He, Lijun
    Zhang, Wanyue
    2020 IEEE INTERNATIONAL CONFERENCE ON EDGE COMPUTING (EDGE 2020), 2020, : 1 - 8
  • [38] Multiagent Deep Reinforcement Learning for Vehicular Computation Offloading in IoT
    Zhu, Xiaoyu
    Luo, Yueyi
    Liu, Anfeng
    Bhuiyan, Md Zakirul Alam
    Zhang, Shaobo
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (12) : 9763 - 9773
  • [39] Hybrid Multi-Server Computation Offloading in Air-Ground Vehicular Networks Empowered by Federated Deep Reinforcement Learning
    Song, Xiaoqin
    Chen, Quan
    Wang, Shumo
    Song, Tiecheng
    Xu, Lei
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (06): : 5175 - 5189
  • [40] Asynchronous Deep Reinforcement Learning for Data-Driven Task Offloading in MEC-Empowered Vehicular Networks
    Dai, Penglin
    Hu, Kaiwen
    Wu, Xiao
    Xing, Huanlai
    Yu, Zhaofei
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,