Saving time and cost on the scheduling of fog-based IoT applications using deep reinforcement learning approach

Cited by: 81
Authors
Gazori, Pegah [1 ]
Rahbari, Dadmehr [1 ]
Nickray, Mohsen [1 ]
Affiliations
[1] Univ Qom, Dept Comp Engn & Informat Technol, Alghadir Ave, POB 3716146611, Qom, Iran
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2020, Vol. 110
Keywords
Fog computing; Task scheduling; Deep reinforcement learning; Double Q-Learning; Service delay; Computation cost; MANAGEMENT;
DOI
10.1016/j.future.2019.09.060
CLC Classification Number
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
Due to the rapid growth of intelligent devices and Internet of Things (IoT) applications in recent years, the volume of data generated by these devices is increasing ceaselessly. Moving all of this data to cloud datacenters is therefore impractical and would increase bandwidth usage, latency, cost, and energy consumption. In such cases, the fog layer is the best place for data processing. In the fog layer, computing equipment dedicates part of its limited resources to processing IoT application tasks. Efficient utilization of computing resources is thus of great importance and requires an optimal and intelligent task-scheduling strategy. In this paper, we focus on the task scheduling of fog-based IoT applications with the aim of minimizing long-term service delay and computation cost under resource and deadline constraints. To address this problem, we use a reinforcement learning approach and propose a Double Deep Q-Learning (DDQL)-based scheduling algorithm that employs the target network and experience replay techniques. The evaluation results reveal that our proposed algorithm outperforms several baseline algorithms in terms of service delay, computation cost, energy consumption, and task accomplishment, and also handles the Single Point of Failure (SPoF) and load-balancing challenges. (C) 2019 Elsevier B.V. All rights reserved.
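As a rough sketch only (not the authors' implementation; the function name, discount factor, and example values below are illustrative assumptions), the double Q-learning target that such a scheduler computes for a transition sampled from the replay buffer can be expressed as follows: the online network selects the next scheduling action (e.g., which fog or cloud node receives the task), while the separate target network evaluates it, which is what distinguishes DDQL from plain deep Q-learning.

import numpy as np

def double_q_target(reward, q_next_online, q_next_target, gamma=0.99, done=False):
    # Double DQN bootstrap target for one transition sampled from the replay buffer.
    # q_next_online / q_next_target: Q-value vectors for the next state, one entry per
    # candidate scheduling action, produced by the online and target networks respectively.
    if done:
        return reward
    best_action = int(np.argmax(q_next_online))          # online network selects the action
    return reward + gamma * q_next_target[best_action]   # target network evaluates it

# Illustrative use with three candidate nodes (values are made up):
y = double_q_target(reward=-0.3,
                    q_next_online=np.array([1.2, 0.4, 0.9]),
                    q_next_target=np.array([1.0, 0.5, 1.1]))

Under these assumptions, the target y would serve as the regression label for the online network's Q-value of the action actually taken, with the target network's weights refreshed periodically from the online network.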
Pages: 1098-1115
Number of pages: 18