Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning

Cited by: 7
Authors
Chen, Xing [1 ,2 ,3 ]
Hu, Shengxi [1 ,2 ,3 ]
Yu, Chujia [1 ,2 ,3 ]
Chen, Zheyi [1 ,2 ,3 ]
Min, Geyong [4 ]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350116, Peoples R China
[2] Minist Educ, Engn Res Ctr Big Data Intelligence, Fuzhou 350002, Peoples R China
[3] Fuzhou Univ, Fujian Key Lab Network Comp & Intelligent Informat, Fuzhou 350116, Peoples R China
[4] Univ Exeter, Fac Environm Sci & Econ, Dept Comp Sci, Exeter EX4 4QF, England
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Mobile applications; Servers; Cloud computing; Real-time systems; Computational modeling; Heuristic algorithms; Cloud-edge computing; deep reinforcement learning; dependent and parallel tasks; real-time offloading; WORKFLOW;
DOI
10.1109/TPDS.2023.3349177
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
As an effective technique to relieve resource constraints on mobile devices (MDs), computation offloading utilizes powerful cloud and edge resources to process the computation-intensive tasks of mobile applications uploaded from MDs. In cloud-edge computing, the resources (e.g., cloud and edge servers) that can be accessed by mobile applications may change dynamically. Meanwhile, the parallel tasks in mobile applications may lead to a huge solution space of offloading decisions. Therefore, it is challenging to determine proper offloading plans in response to such high dynamics and complexity in cloud-edge environments. Existing studies often preset the priority of parallel tasks to simplify the solution space of offloading decisions, and thus proper offloading plans cannot be found in many cases. To address this challenge, we propose a novel real-time and Dependency-aware task Offloading method with Deep Q-networks (DODQ) in cloud-edge computing. In DODQ, mobile applications are first modeled as Directed Acyclic Graphs (DAGs). Next, Deep Q-Networks (DQN) are customized to train the task-offloading decision-making model, aiming to quickly complete the decision-making process and generate new offloading plans when the environment changes; the method considers the parallelism of tasks without presetting task priorities during scheduling. Simulation results show that DODQ adapts well to different environments and makes offloading decisions efficiently. Moreover, DODQ outperforms state-of-the-art methods and quickly reaches optimal/near-optimal performance.
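To make the abstract's idea concrete, below is a minimal, illustrative sketch (not the authors' DODQ implementation) of the general pattern it describes: a mobile application modeled as a DAG of dependent tasks, and a small DQN-style agent that, at each step, jointly chooses which ready task to schedule and where to run it (local MD, edge, or cloud), so no task priority is preset. The environment model, class names (Task, OffloadEnv), speeds, delays, reward definition, and network sizes are all assumptions made for this example; it uses PyTorch.

import random
from dataclasses import dataclass, field

import torch
import torch.nn as nn


@dataclass
class Task:
    tid: int
    workload: float                             # required CPU cycles (normalized); assumed units
    preds: list = field(default_factory=list)   # predecessor task ids (DAG dependencies)


class OffloadEnv:
    """Toy cloud-edge environment: 3 locations (MD, edge, cloud) with assumed
    relative speeds and upload delays; a task is 'ready' once all its
    predecessors have finished."""

    SPEEDS = [1.0, 3.0, 10.0]        # assumed relative CPU speeds: MD, edge, cloud
    TX_DELAY = [0.0, 0.2, 0.8]       # assumed per-task upload delays

    def __init__(self, dag):
        self.dag = dag
        self.reset()

    def reset(self):
        self.done_tasks = set()
        self.elapsed = 0.0
        return self._state()

    def ready_tasks(self):
        # Tasks whose predecessors are all finished (no preset priority order).
        return [t for t in self.dag
                if t.tid not in self.done_tasks
                and all(p in self.done_tasks for p in t.preds)]

    def _state(self):
        # Feature vector: fraction of tasks finished + workloads of up to 3 ready tasks.
        ready = self.ready_tasks()
        feats = [len(self.done_tasks) / len(self.dag)]
        feats += [t.workload for t in ready[:3]] + [0.0] * (3 - min(3, len(ready)))
        return torch.tensor(feats, dtype=torch.float32)

    def step(self, action):
        # action in {0..8}: which of the first 3 ready tasks to run (action // 3)
        # and on which location (action % 3).
        ready = self.ready_tasks()
        task = ready[min(action // 3, len(ready) - 1)]
        loc = action % 3
        latency = self.TX_DELAY[loc] + task.workload / self.SPEEDS[loc]
        self.elapsed += latency
        self.done_tasks.add(task.tid)
        done = len(self.done_tasks) == len(self.dag)
        return self._state(), -latency, done    # reward = negative latency


def make_qnet(state_dim=4, n_actions=9):
    # Small Q-network mapping the state features to Q-values over joint
    # (ready task, location) actions.
    return nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                         nn.Linear(32, n_actions))


if __name__ == "__main__":
    # A tiny example DAG: task 3 depends on 1 and 2, which both depend on 0.
    dag = [Task(0, 1.0), Task(1, 2.0, [0]), Task(2, 1.5, [0]), Task(3, 1.0, [1, 2])]
    env, qnet = OffloadEnv(dag), make_qnet()
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    for episode in range(200):                  # tiny epsilon-greedy training loop
        state, done, eps = env.reset(), False, 0.2
        while not done:
            q = qnet(state)
            action = random.randrange(9) if random.random() < eps else int(q.argmax())
            next_state, reward, done = env.step(action)
            with torch.no_grad():
                target = reward + (0.0 if done else 0.9 * qnet(next_state).max())
            loss = (q[action] - target) ** 2     # 1-step TD error
            opt.zero_grad()
            loss.backward()
            opt.step()
            state = next_state
    print("makespan of last episode:", env.elapsed)

A full DQN, as referenced in the paper, would additionally use experience replay and a target network; this sketch only shows how a learned Q-function can pick among ready DAG tasks and execution locations jointly, rather than following a fixed priority list.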
Pages: 391 - 404
Number of pages: 14
Related Papers
50 records in total
  • [21] The Fusion of Deep Reinforcement Learning and Edge Computing for Real-time Monitoring and Control Optimization in IoT Environments
    Xu, Jingyu
    Wan, Weixiang
    Pan, Linying
    Sun, Wenjian
    Liu, Yuxiang
    2024 3RD INTERNATIONAL CONFERENCE ON ENERGY AND POWER ENGINEERING, CONTROL ENGINEERING, EPECE 2024, 2024, : 193 - 196
  • [22] Resource Allocation Strategy Using Deep Reinforcement Learning in Cloud-Edge Collaborative Computing Environment
    Cen, Junjie
    Li, Yongbo
    MOBILE INFORMATION SYSTEMS, 2022, 2022
  • [23] Distributed Real-Time Scheduling in Cloud Manufacturing by Deep Reinforcement Learning
    Zhang, Lixiang
    Yang, Chen
    Yan, Yan
    Hu, Yaoguang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (12) : 8999 - 9007
  • [24] Cost-minimized User Association and Partial Offloading for Dependent Tasks in Hybrid Cloud-edge Systems
    Yuan, Haitao
    Hu, Qinglong
    Wang, Meijia
    Bi, Jing
    Zhou, MengChu
    2022 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2022, : 1059 - 1064
  • [25] A Deep Reinforcement Learning Approach for Efficient Image Processing Task Offloading in Edge-Cloud Collaborative Environments
    Sun, Ming
    Bao, Tie
    Xie, Dan
    Lv, Hengyi
    Si, Guoliang
    TRAITEMENT DU SIGNAL, 2023, 40 (04) : 1329 - 1339
  • [26] Deep reinforcement learning for energy and time optimized scheduling of precedence-constrained tasks in edge-cloud computing environments
    Jayanetti, Amanda
    Halgamuge, Saman
    Buyya, Rajkumar
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 137 : 14 - 30
  • [27] Dependent Task Offloading for Edge Computing based on Deep Reinforcement Learning
    Wang, Jin
    Hu, Jia
    Min, Geyong
    Zhan, Wenhan
    Zomaya, Albert Y.
    Georgalas, Nektarios
    IEEE TRANSACTIONS ON COMPUTERS, 2022, 71 (10) : 2449 - 2461
  • [28] Real-time fire and smoke detection with transfer learning based on cloud-edge collaborative architecture
    Yang, Ming
    Qian, Songrong
    Wu, Xiaoqin
    IET IMAGE PROCESSING, 2024, 18 (12) : 3716 - 3728
  • [29] An approach for Offloading Divisible Tasks Using Double Deep Reinforcement Learning in Mobile Edge Computing Environment
    Kabdjou, Joelle
    Shinomiya, Norihiko
    2024 INTERNATIONAL TECHNICAL CONFERENCE ON CIRCUITS/SYSTEMS, COMPUTERS, AND COMMUNICATIONS, ITC-CSCC 2024, 2024,
  • [30] CoEdge: A Cooperative Edge System for Distributed Real-Time Deep Learning Tasks
    Jiang, Zhehao
    Ling, Neiwen
    Huang, Xuan
    Shi, Shuyao
    Wu, Chenhao
    Zhao, Xiaoguang
    Yan, Zhenyu
    Xing, Guoliang
    PROCEEDINGS OF THE 2023 THE 22ND INTERNATIONAL CONFERENCE ON INFORMATION PROCESSING IN SENSOR NETWORKS, IPSN 2023, 2023, : 53 - 66