Real-Time Offloading for Dependent and Parallel Tasks in Cloud-Edge Environments Using Deep Reinforcement Learning

Cited by: 7
|
Authors
Chen, Xing [1 ,2 ,3 ]
Hu, Shengxi [1 ,2 ,3 ]
Yu, Chujia [1 ,2 ,3 ]
Chen, Zheyi [1 ,2 ,3 ]
Min, Geyong [4 ]
Affiliations
[1] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350116, Peoples R China
[2] Minist Educ, Engn Res Ctr Big Data Intelligence, Fuzhou 350002, Peoples R China
[3] Fuzhou Univ, Fujian Key Lab Network Comp & Intelligent Informat, Fuzhou 350116, Peoples R China
[4] Univ Exeter, Fac Environm Sci & Econ, Dept Comp Sci, Exeter EX4 4QF, England
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Mobile applications; Servers; Cloud computing; Real-time systems; Computational modeling; Heuristic algorithms; Cloud-edge computing; deep reinforcement learning; dependent and parallel tasks; real-time offloading; WORKFLOW;
DOI
10.1109/TPDS.2023.3349177
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
As an effective technique for alleviating resource constraints on mobile devices (MDs), computation offloading uses powerful cloud and edge resources to process the computation-intensive tasks of mobile applications uploaded from MDs. In cloud-edge computing, the resources (e.g., cloud and edge servers) accessible to mobile applications may change dynamically. Meanwhile, the parallel tasks in mobile applications can lead to a huge solution space of offloading decisions. It is therefore challenging to determine proper offloading plans under such high dynamics and complexity in cloud-edge environments. Existing studies often preset the priority of parallel tasks to simplify the solution space of offloading decisions, and thus proper offloading plans cannot be found in many cases. To address this challenge, we propose a novel real-time and Dependency-aware task Offloading method with Deep Q-networks (DODQ) for cloud-edge computing. In DODQ, mobile applications are first modeled as Directed Acyclic Graphs (DAGs). Next, a Deep Q-Network (DQN) is customized to train the decision-making model of task offloading, which handles the parallelism of tasks without presetting task priorities during scheduling, so that decisions can be made quickly and new offloading plans can be generated when the environment changes. Simulation results show that DODQ adapts well to different environments and makes offloading decisions efficiently. Moreover, DODQ outperforms state-of-the-art methods and quickly reaches optimal/near-optimal performance.
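To make the idea concrete, the sketch below (not the authors' DODQ implementation) models a toy application as a DAG and trains an epsilon-greedy agent that picks an offloading target (device, edge, or cloud) for each ready task, without presetting a priority among parallel tasks. The task graph, the cost figures, and the linear Q-approximation used in place of a full deep Q-network are all illustrative assumptions.

```python
# Minimal sketch of dependency-aware offloading decisions (illustrative only).
# All task names, costs, and the linear Q-approximation are assumptions,
# not the DODQ method described in the paper.
import random
import numpy as np

TARGETS = ["device", "edge", "cloud"]                       # candidate offloading targets
EXEC_COST = {"device": 5.0, "edge": 2.0, "cloud": 1.0}      # assumed per-unit compute cost
TX_COST = {"device": 0.0, "edge": 1.0, "cloud": 3.0}        # assumed transmission cost

# Application as a DAG: task -> list of predecessor tasks (assumed example).
dag = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
workload = {"t1": 2.0, "t2": 4.0, "t3": 3.0, "t4": 1.0}     # assumed task sizes

def ready_tasks(done):
    """Tasks whose predecessors have all finished; parallel tasks appear here together."""
    return [t for t, preds in dag.items() if t not in done and all(p in done for p in preds)]

def features(task, target):
    """Tiny state-action feature vector for the linear Q-approximation."""
    return np.array([workload[task] * EXEC_COST[target], TX_COST[target], 1.0])

w = np.zeros(3)                    # linear Q weights, standing in for a deep Q-network
alpha, eps, episodes = 0.001, 0.2, 500

for _ in range(episodes):
    done = set()
    while len(done) < len(dag):
        task = random.choice(ready_tasks(done))             # no preset priority among parallel tasks
        if random.random() < eps:                            # epsilon-greedy exploration
            target = random.choice(TARGETS)
        else:
            target = max(TARGETS, key=lambda a: float(w @ features(task, a)))
        cost = workload[task] * EXEC_COST[target] + TX_COST[target]
        reward = -cost                                        # minimize per-task cost
        # Bandit-style update: regress Q(task, target) toward the immediate reward.
        w += alpha * (reward - float(w @ features(task, target))) * features(task, target)
        done.add(task)

# After training, read out a greedy offloading plan for each task (illustration only).
plan = {t: max(TARGETS, key=lambda a: float(w @ features(t, a))) for t in dag}
print(plan)
```

In the full method described by the abstract, the linear approximator would be replaced by a neural Q-network and the reward would reflect the application's completion time under dynamic cloud-edge resources rather than this simple per-task cost.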
Pages: 391 - 404
Number of pages: 14
Related Papers
50 records in total
  • [41] Multi-resource interleaving for task scheduling in cloud-edge system by deep reinforcement learning
    Pei, Xinglong
    Sun, Penghao
    Hu, Yuxiang
    Li, Dan
    Tian, Le
    Li, Ziyong
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 160 : 522 - 536
  • [42] Learning-aided fine grained offloading for real-time applications in edge-cloud computing
    Huang, Qihe
    Xu, Xiaolong
    Chen, Jinhui
    WIRELESS NETWORKS, 2024, 30 (05) : 3805 - 3820
  • [43] Federated deep reinforcement learning for dynamic job scheduling in cloud-edge collaborative manufacturing systems
    Wang, Xiaohan
    Zhang, Lin
    Wang, Lihui
    Wang, Xi Vincent
    Liu, Yongkui
    INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH, 2024, 62 (21) : 7743 - 7762
  • [44] Cloud-edge collaboration task scheduling in cloud manufacturing: An attention-based deep reinforcement learning approach
    Chen, Zhen
    Zhang, Lin
    Wang, Xiaohan
    Wang, Kunyu
    COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 177
  • [45] Real-time security margin control using deep reinforcement learning
    Hagmar, Hannes
    Eriksson, Robert
    Tuan, Le Anh
    ENERGY AND AI, 2023, 13
  • [46] Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning
    Ji, Ying
    Wang, Jianhui
    Xu, Jiacan
    Fang, Xiaoke
    Zhang, Huaguang
    ENERGIES, 2019, 12 (12)
  • [47] Enhanced multi-objective gorilla troops optimizer for real-time multi-user dependent tasks offloading in edge-cloud computing
    Hosny, Khalid M.
    Awad, Ahmed I.
    Khashaba, Marwa M.
    Fouda, Mostafa M.
    Guizani, Mohsen
    Mohamed, Ehab R.
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2023, 218
  • [48] Deep Reinforcement Learning for Dynamic Task Scheduling in Edge-Cloud Environments
    Rani, D. Mamatha
    Supreethi, K. P.
    Jayasingh, Bipin Bihari
    INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2024, 15 (10) : 837 - 850
  • [49] Machine-Learning-Based Real-Time Economic Dispatch in Islanding Microgrids in a Cloud-Edge Computing Environment
    Dong, Wei
    Yang, Qiang
    Li, Wei
    Zomaya, Albert Y.
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (17): 13703 - 13711
  • [50] Task offloading and resource allocation algorithm based on deep reinforcement learning for distributed AI execution tasks in IoT edge computing environments
    Aghapour, Zahra
    Sharifian, Saeed
    Taheri, Hassan
    COMPUTER NETWORKS, 2023, 223