Task Scheduling in Cloud Using Deep Reinforcement Learning

Cited by: 36
Authors
Swarup, Shashank [1 ]
Shakshuki, Elhadi M. [1 ]
Yasar, Ansar [2 ]
Affiliations
[1] Acadia Univ, Jodrey Sch Comp Sci, Wolfville, NS B4P 2R6, Canada
[2] Hasselt Univ, Transportat Res Inst, B-3500 Hasselt, Belgium
Source
12TH INTERNATIONAL CONFERENCE ON AMBIENT SYSTEMS, NETWORKS AND TECHNOLOGIES (ANT) / THE 4TH INTERNATIONAL CONFERENCE ON EMERGING DATA AND INDUSTRY 4.0 (EDI40) / AFFILIATED WORKSHOPS | 2021, Vol. 184
Keywords
task scheduling; computational cost; energy consumption; deep reinforcement learning; Clipped Double Deep Q-learning (CDDQL);
DOI
10.1016/j.procs.2021.03.016
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Cloud computing is an emerging technology used in many applications such as data analysis, storage, and the Internet of Things (IoT). As the number of cloud users and cloud-integrated IoT devices grows, the amount of data they generate increases ceaselessly, and managing this data in the cloud is no longer an easy task. Moving all data to cloud datacenters is practically impossible and leads to excessive bandwidth usage, latency, cost, and energy consumption. Allocating resources to users' tasks is therefore an essential quality feature of cloud computing: it provides customers with a high Quality of Service (QoS) and the best response time while respecting the established Service Level Agreement. Efficient utilization of computing resources thus requires an optimal task scheduling strategy. This paper focuses on the problem of task scheduling for cloud-based applications and aims to minimize the computational cost under resource and deadline constraints. Towards this end, we propose a clipped double deep Q-learning algorithm that uses the target network and experience replay techniques, as well as the reinforcement learning approach. (C) 2021 The Authors. Published by Elsevier B.V.
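The core update behind clipped double deep Q-learning, as named in the abstract, can be sketched for a single transition as follows. This is a minimal illustration under standard assumptions about the technique, not the authors' implementation: the function name, the plain-list action values, and the toy numbers are all hypothetical (in practice the two value estimates come from two target networks).

```python
def clipped_double_q_target(reward, done, q1_next, q2_next, gamma=0.99):
    """Clipped double Q-learning target for one transition.

    q1_next / q2_next: the two target networks' action-value estimates
    for the next state (plain lists here for illustration). The greedy
    action is selected with the first estimate, and the smaller of the
    two values at that action forms the bootstrap target, which curbs
    the overestimation bias of standard Q-learning.
    """
    a_star = max(range(len(q1_next)), key=lambda a: q1_next[a])
    next_value = min(q1_next[a_star], q2_next[a_star])
    # Terminal transitions (done == 1) receive no bootstrapped value.
    return reward + gamma * (1.0 - done) * next_value

# Toy transition with made-up numbers: reward 1.0, non-terminal state.
target = clipped_double_q_target(1.0, 0.0, [2.0, 3.0], [2.5, 2.0])
```

Here the greedy action under the first estimate is action 1 (value 3.0), the second estimate gives 2.0 for that action, and the smaller value is bootstrapped, yielding a more conservative target than vanilla Q-learning would produce.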
Pages: 42-51
Page count: 10