Cloud-Edge-End Collaborative Task Offloading in Vehicular Edge Networks: A Multilayer Deep Reinforcement Learning Approach

Cited by: 0
Authors
Wu, Jiaqi [1 ,2 ]
Tang, Ming [3 ]
Jiang, Changkun [4 ]
Gao, Lin [1 ,2 ]
Cao, Bin [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Shenzhen 518055, Peoples R China
[2] Harbin Inst Technol, Guangdong Prov Key Lab Aerosp Commun & Networking, Shenzhen 518055, Peoples R China
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[4] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 22
Funding
National Natural Science Foundation of China;
Keywords
Servers; Cloud computing; Processor scheduling; Collaboration; Vehicle-to-infrastructure; Edge computing; Vehicular ad hoc networks; Resource management; Deep reinforcement learning; Decision making; Deep reinforcement learning (DRL); mobile-edge computing (MEC); task offloading; vehicular edge network (VEN); ALLOCATION;
DOI
10.1109/JIOT.2024.3472472
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Mobile-edge computing (MEC) is a promising computing scheme to support computation-intensive AI applications in vehicular networks, by enabling vehicles to offload computation tasks to edge computing servers deployed at nearby roadside units (RSUs). In this work, we consider an MEC-enabled vehicular edge network (VEN), where each vehicle can offload tasks to edge/cloud computing servers via vehicle-to-infrastructure (V2I) links or to other end vehicles via vehicle-to-vehicle (V2V) links. In such a cloud-edge-end collaborative offloading scenario, we focus on the joint task offloading, scheduling, and resource allocation problem for vehicles, which is challenging due to the online and asynchronous decision-making requirement for each task. To solve the problem, we propose a multilayer deep reinforcement learning (DRL)-based approach, in which each vehicle constructs and trains three modules to make decisions at different layers: 1) Offloading Module (first layer), which determines whether to offload each task, using the dueling and double deep Q-network (D3QN) framework; 2) Scheduling Module (second layer), which determines where and how to offload each task in the offloading queues, together with the transmission power, using the parameterized deep Q-network (PDQN) framework; and 3) Computing Module (third layer), which determines how much computing resource to allocate to each task in the computation queues, using classic optimization techniques. We provide the detailed algorithm design and perform extensive simulations to evaluate its performance. Simulation results show that our proposed algorithm outperforms existing algorithms in the literature, reducing the average cost by 25.86%-72.51% and increasing the average satisfaction rate by 3.48%-90.53%.
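The first-layer Offloading Module is based on the dueling and double deep Q-network (D3QN). Purely as an illustrative aid, the sketch below shows a generic dueling Q-network head together with a double-DQN target computation in PyTorch; the class and function names, layer sizes, and dimensions are assumptions made for this sketch and are not taken from the paper's implementation.

    import torch
    import torch.nn as nn

    class DuelingQNetwork(nn.Module):
        # Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
            super().__init__()
            self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            self.value = nn.Linear(hidden, 1)                # state-value stream V(s)
            self.advantage = nn.Linear(hidden, num_actions)  # advantage stream A(s, a)

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.feature(state)
            v, a = self.value(h), self.advantage(h)
            return v + a - a.mean(dim=-1, keepdim=True)

    def double_dqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
        # Double-DQN target: the online network selects the next action and the
        # target network evaluates it, which mitigates Q-value overestimation.
        with torch.no_grad():
            next_action = online_net(next_state).argmax(dim=-1, keepdim=True)
            q_next = target_net(next_state).gather(-1, next_action).squeeze(-1)
            return reward + gamma * (1.0 - done) * q_next

In the paper's setting, the first-layer action would be a per-task binary offloading decision; the second-layer PDQN (which mixes discrete scheduling choices with continuous transmission power) and the third-layer optimization-based resource allocation are not sketched here.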
Pages: 36272-36290
Page count: 19