Diminishing Returns and Deep Learning for Adaptive CPU Resource Allocation of Containers

Cited: 20
Authors
Abdullah, Muhammad [1 ]
Iqbal, Waheed [1 ]
Bukhari, Faisal [1 ]
Erradi, Abdelkarim [2 ]
Affiliations
[1] Univ Punjab, Punjab Univ Coll Informat Technol, Lahore 54590, Pakistan
[2] Qatar Univ, Coll Engn, Dept Comp Sci & Engn, Doha, Qatar
Source
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT | 2020, Vol. 17, No. 4
Keywords
Pins; Resource management; Containers; Data centers; Scheduling algorithms; Cloud computing; Performance gain; CPU pin; job completion time; CPU allocation; diminishing marginal; MANAGEMENT
DOI
10.1109/TNSM.2020.3033025
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Containers provide a lightweight runtime environment for microservice applications while enabling better server utilization. Automatically allocating the optimal number of CPU pins to the containers serving specific workloads can help minimize job completion time. Most existing state-of-the-art work focuses on building new, efficient scheduling algorithms for placing containers on the infrastructure, while resources are allocated to the containers manually and statically. An automatic method to identify and allocate optimal CPU resources to containers can therefore improve the efficiency of these scheduling algorithms. In this article, we introduce a new deep learning-based approach to automatically allocate optimal CPU resources to containers. Our approach uses the law of diminishing marginal returns to determine the optimal number of CPU pins per container that yields maximum performance while maximizing the number of concurrent jobs. The proposed method is evaluated using real workloads on a Docker-based containerized infrastructure. The results demonstrate the effectiveness of the proposed solution in reducing job completion time by 23% to 74% compared to commonly used static CPU allocation methods.
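To make the diminishing-returns idea concrete, the sketch below is a minimal illustration, not the authors' implementation: it assumes a hypothetical trained predictor predict_completion_time(features, pins) and a chosen gain threshold, sweeps the pin count upward, and stops once the predicted marginal gain from one more pin falls below that threshold.

# Illustrative sketch only (not the paper's exact model): choosing a CPU-pin
# count by the law of diminishing marginal returns over predicted job
# completion times. The predictor, feature dict, and gain threshold are
# assumptions made for this example.
from typing import Callable, Dict

def select_cpu_pins(
    predict_completion_time: Callable[[Dict[str, float], int], float],
    workload_features: Dict[str, float],
    max_pins: int,
    min_marginal_gain: float = 0.05,
) -> int:
    """Return the smallest pin count beyond which adding one more pin no longer
    reduces the predicted completion time by at least min_marginal_gain."""
    best_pins = 1
    prev_time = predict_completion_time(workload_features, 1)
    for pins in range(2, max_pins + 1):
        cur_time = predict_completion_time(workload_features, pins)
        # Relative improvement obtained by adding one more CPU pin.
        gain = (prev_time - cur_time) / prev_time if prev_time > 0 else 0.0
        if gain < min_marginal_gain:
            break  # diminishing returns: extra pins buy little for this job
        best_pins = pins
        prev_time = cur_time
    return best_pins

if __name__ == "__main__":
    # Toy predictor with diminishing returns: time ~ work / pins + fixed overhead.
    toy_predictor = lambda feats, pins: feats["work"] / pins + 2.0
    # With a 15% marginal-gain cutoff this prints 6 for the toy workload.
    print(select_cpu_pins(toy_predictor, {"work": 100.0}, max_pins=8, min_marginal_gain=0.15))

In a Docker deployment, the selected count would then be mapped to a concrete CPU set, for example via docker update --cpuset-cpus; the paper's own pinning and prediction mechanisms may differ from this sketch.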
Pages: 2052-2063
Page count: 12