UAV-Assisted Heterogeneous Multi-Server Computation Offloading With Enhanced Deep Reinforcement Learning in Vehicular Networks

Cited by: 0
Authors
Song, Xiaoqin [1 ,2 ]
Zhang, Wenjing [1 ]
Lei, Lei [1 ]
Zhang, Xinting [1 ]
Zhang, Lijuan [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Elect & Informat Engn, Nanjing 210016, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Key Lab Broadband Wireless Commun & Sensor Network, Minist Educ, Nanjing 210003, Peoples R China
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, Vol. 11, No. 6
Funding
National Natural Science Foundation of China;
Keywords
Servers; Task analysis; Delays; TV; Autonomous aerial vehicles; Vehicle dynamics; Costs; Computation offloading; deep reinforcement learning; Internet of Vehicles; multi-access edge computing (MEC); resource allocation; RESOURCE-ALLOCATION; EDGE; ACCESS; FOG;
DOI
10.1109/TNSE.2024.3446667
Chinese Library Classification (CLC)
T [Industrial Technology];
Discipline classification code
08;
Abstract
With the development of intelligent transportation systems (ITS), computation-intensive and latency-sensitive applications are flourishing, posing significant challenges to resource-constrained task vehicles (TVEs). Multi-access edge computing (MEC) is recognized as a paradigm that addresses these issues by deploying hybrid servers at the edge and seamlessly integrating their computing capabilities. Additionally, flexible unmanned aerial vehicles (UAVs) serve as relays to overcome the problem of non-line-of-sight (NLoS) propagation in vehicle-to-vehicle (V2V) communications. In this paper, we propose a UAV-assisted heterogeneous multi-server computation offloading (HMSCO) scheme. Specifically, our optimization objective is to minimize the cost, measured as a weighted sum of delay and energy consumption, under constraints including reliability requirements, tolerable delay, and computing resource limits. Since the problem is non-convex, it is further decomposed into two sub-problems. First, a game-based binary offloading decision (BOD) determines whether to offload, based on the parameters of the computing tasks and the network. Then, a multi-agent enhanced dueling double deep Q-network (ED3QN) with centralized training and distributed execution is introduced to optimize server offloading decisions and resource allocation. Simulation results demonstrate the good convergence and robustness of the proposed algorithm in a highly dynamic vehicular environment.
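To make the learning component of the abstract concrete, the following is a minimal, illustrative PyTorch-style sketch of the dueling double deep Q-network computation that an ED3QN-style agent builds on; it is not the authors' implementation. All identifiers, layer sizes, and the single-agent form are assumptions made for illustration, and the paper's specific enhancements, the multi-agent centralized-training / distributed-execution setup, the game-based BOD stage, and the exact weighted delay-energy reward are not reproduced here.

# Hypothetical sketch of a dueling double DQN (single agent); not the paper's ED3QN code.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: shared trunk with separate value and advantage streams."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN target: the online network selects the next action, the target network evaluates it."""
    with torch.no_grad():
        next_action = online(next_state).argmax(dim=-1, keepdim=True)
        next_q = target(next_state).gather(-1, next_action).squeeze(-1)
        return reward + gamma * (1.0 - done) * next_q

In an offloading setting such as the one described above, the per-step reward would plausibly be the negative of the weighted delay-energy cost, and each agent's action would index a server choice and a resource-allocation level; those mappings are defined in the paper, not in this sketch.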
Pages: 5323 - 5335
Number of pages: 13
Related papers
50 records in total
  • [41] Intelligent Task Offloading in Vehicular Networks: A Deep Reinforcement Learning Perspective
    Fofana, Namory
    Ben Letaifa, Asma
    Rachedi, Abderrezak
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (01) : 201 - 216
  • [42] Resource Allocation for UAV-Assisted IoT Networks with Energy Harvesting and Computation Offloading
    Xu, Hao
    Pan, Cunhua
    Wang, Kezhi
    Chen, Ming
    Nallanathan, Arumugam
    2019 11TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2019
  • [43] Computation offloading over multi-UAV MEC network: A distributed deep reinforcement learning approach
    Wei, Dawei
    Ma, Jianfeng
    Luo, Linbo
    Wang, Yunbo
    He, Lei
    Li, Xinghua
    COMPUTER NETWORKS, 2021, 199
  • [44] Joint Distributed Computation Offloading and Radio Resource Slicing Based on Reinforcement Learning in Vehicular Networks
    Alaghbari, Khaled A.
    Lim, Heng-Siong
    Zarakovitis, Charilaos C.
    Latiff, N. M. Abdul
    Ariffin, Sharifah Hafizah Syed
    Chien, Su Fong
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2025, 6 : 1231 - 1245
  • [45] Deep-Reinforcement-Learning-Based Distributed Computation Offloading in Vehicular Edge Computing Networks
    Geng, Liwei
    Zhao, Hongbo
    Wang, Jiayue
    Kaushik, Aryan
    Yuan, Shuai
    Feng, Wenquan
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (14) : 12416 - 12433
  • [46] Optimizing Energy Efficiency in Vehicular Edge-Cloud Networks Through Deep Reinforcement Learning-Based Computation Offloading
    Elgendy, Ibrahim A.
    Muthanna, Ammar
    Alshahrani, Abdullah
    Hassan, Dina S. M.
    Alkanhel, Reem
    Elkawkagy, Mohamed
    IEEE ACCESS, 2024, 12 : 191537 - 191550
  • [47] Computation Offloading and Trajectory Planning of Multi-UAV-Enabled MEC: A Knowledge-Assisted Multiagent Reinforcement Learning Approach
    Li, Xulong
    Qin, Yunhui
    Huo, Jiahao
    Wei, Huangfu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (05) : 7077 - 7088
  • [48] UAV-assisted task offloading for IoT in smart buildings and environment via deep reinforcement learning
    Xu, Jiajie
    Li, Dejuan
    Gu, Wei
    Chen, Ying
    BUILDING AND ENVIRONMENT, 2022, 222
  • [49] Computation Migration and Resource Allocation in Heterogeneous Vehicular Networks: A Deep Reinforcement Learning Approach
    Wang, Hui
    Ke, Hongchang
    Liu, Gang
    Sun, Weijia
    IEEE ACCESS, 2020, 8 : 171140 - 171153
  • [50] Towards Efficient Task Offloading With Dependency Guarantees in Vehicular Edge Networks Through Distributed Deep Reinforcement Learning
    Liu, Haoqiang
    Huang, Wenzheng
    Kim, Dong In
    Sun, Sumei
    Zeng, Yonghong
    Feng, Shaohan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09) : 13665 - 13681