Deep Reinforcement Learning Task Scheduling Method for Real-Time Performance Awareness

Times Cited: 0
Authors
Wang, Jinming [1 ]
Li, Shaobo [1 ]
Zhang, Xingxing [1 ,2 ]
Zhu, Keyu [1 ]
Xie, Cankun [1 ]
Wu, Fengbin [1 ]
Affiliations
[1] Guizhou Univ, State Key Lab Publ Big Data, Guiyang 550025, Guizhou, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 119077, Singapore
Source
IEEE ACCESS | 2025, Volume 13
Funding
National Natural Science Foundation of China;
Keywords
Dynamic scheduling; Cloud computing; Heuristic algorithms; Scheduling; Load management; Real-time systems; Time factors; Stochastic processes; Servers; Load modeling; Task scheduling; load performance fluctuation; deep reinforcement learning; load balancing;
DOI
10.1109/ACCESS.2025.3534980
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Load balancing is essential for the efficient delivery of cloud computing services, ensuring stable operation and robust performance under high-load conditions. However, existing load-balancing task scheduling algorithms struggle to adapt to load performance fluctuations in real time, leading to inaccuracies in evaluating task execution efficiency and consequently degrading the quality of service in actual cloud task scheduling. To address this issue, we propose a real-time performance-aware task scheduling method based on the Soft Actor-Critic algorithm (RTPA-SAC). This method detects server load performance changes in real time, enhancing environmental consistency and adaptability in stochastic, dynamic task scheduling and thereby improving load balancing. First, we construct a bounded load performance loss function to evaluate task execution efficiency, accounting for the impact of parallel task interference. Next, a reward mechanism is introduced that considers both load fluctuations and response times, optimizing task load variance within quality-of-service constraints to minimize response time. Finally, by leveraging the Soft Actor-Critic algorithm, the proposed scheduling strategy improves both the exploration and the stability of scheduling decisions. Experimental results show that RTPA-SAC outperforms baseline methods in load balancing, as evidenced by improvements in task response time, average task load variance, and task success rate.
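The exact loss function and reward used by RTPA-SAC are not given in this record; the sketch below is only a minimal, hypothetical Python illustration of the kind of reward the abstract describes, one that jointly penalizes task load variance and response time under a quality-of-service constraint. The function name scheduling_reward, the weights alpha and beta, and the flat QoS-violation penalty are assumptions for illustration, not the paper's formulation.

import numpy as np

def scheduling_reward(server_loads, response_time, qos_deadline, alpha=1.0, beta=0.5):
    # server_loads : per-server utilization after placing the task (each in 0..1)
    # response_time: observed task response time, in seconds
    # qos_deadline : quality-of-service response-time constraint, in seconds
    load_variance = float(np.var(server_loads))   # imbalance across servers
    if response_time > qos_deadline:              # QoS violation: flat penalty (assumed)
        return -1.0
    # Lower load variance and faster responses both yield a larger (less negative) reward.
    return -(alpha * load_variance + beta * response_time / qos_deadline)

# Example: three servers, task completed within a 2-second deadline.
print(scheduling_reward([0.60, 0.70, 0.65], response_time=1.2, qos_deadline=2.0))

In an RTPA-SAC-style setup, a reward of this shape would be fed to a Soft Actor-Critic agent whose action selects the target server for each arriving task.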
Pages: 31385-31400
Number of pages: 16