Liquid: Intelligent Resource Estimation and Network-Efficient Scheduling for Deep Learning Jobs on Distributed GPU Clusters

Cited by: 51
Authors
Gu, Rong [1 ]
Chen, Yuquan [1 ]
Liu, Shuai [1 ]
Dai, Haipeng [1 ]
Chen, Guihai
Zhang, Kai [2 ]
Che, Yang [1 ,2 ]
Huang, Yihua [1 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Jiangsu, Peoples R China
[2] Alibaba Grp, Hangzhou 311121, Zhejiang, Peoples R China
Funding
US National Science Foundation;
Keywords
Graphics processing units; Processor scheduling; Resource management; Estimation; Liquids; Optimization; Training; Job scheduling; resource management; deep learning; GPU clusters;
DOI
10.1109/TPDS.2021.3138825
Chinese Library Classification
TP301 [Theory and methods];
Discipline code
081202;
Abstract
Deep learning (DL) is becoming increasingly popular in many domains, including computer vision, speech recognition, self-driving automobiles, etc. GPUs can train DL models efficiently but are expensive, which motivates users to share GPU resources to reduce monetary costs in practice. To ensure efficient sharing among multiple users, it is necessary to develop efficient GPU resource management and scheduling solutions. However, existing ones have several shortcomings. First, they require users to specify the job resource requirement, which is usually quite inaccurate and leads to cluster resource underutilization. Second, when scheduling DL jobs, they rarely take the cluster network characteristics into consideration, resulting in low job execution performance. To overcome the above issues, we propose Liquid, an efficient GPU resource management platform for DL jobs with intelligent resource requirement estimation and scheduling. First, we propose a regression-model-based method for job resource requirement estimation to prevent users from over-allocating computing resources. Second, we propose intelligent, cluster-network-efficient scheduling methods in both immediate and batch modes based on the above resource requirement estimation techniques. Third, we further propose three system-level optimizations, including pre-scheduling data transmission, fine-grained GPU sharing, and event-driven communication. Experimental results show that Liquid can accelerate job execution speed by 18% on average and shorten the average job completion time (JCT) by 21% compared with cutting-edge solutions. Moreover, the proposed optimization methods are effective in various scenarios.
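The abstract describes a regression-model-based estimator that predicts a job's resource requirement from its characteristics before scheduling. As an illustration only, the sketch below shows the general idea with a random-forest regressor on synthetic data; the feature set (batch size, parameter count, input size), the target (GPU memory), and the model choice are all assumptions for exposition, not Liquid's actual implementation.

```python
# Illustrative sketch: estimating a DL job's GPU-memory need from
# hypothetical job features, before the scheduler places the job.
# Features and the synthetic data-generating relationship are made up.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic history of past jobs: [batch_size, params_in_millions, input_mb]
X = rng.uniform([8, 1, 1], [256, 500, 512], size=(200, 3))
# Synthetic "observed" peak GPU memory in GB (fabricated linear trend + noise)
y = 0.02 * X[:, 0] + 0.008 * X[:, 1] + 0.004 * X[:, 2] \
    + rng.normal(0.0, 0.2, size=200)

# Fit the regressor on historical (features, usage) pairs
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Estimate the requirement for a newly submitted job
new_job = np.array([[64, 120, 128]])
estimate_gb = float(model.predict(new_job)[0])
print(f"estimated GPU memory: {estimate_gb:.2f} GB")
```

In a scheduler, such an estimate would replace the user-supplied requirement that the paper identifies as a source of over-allocation; the prediction would then feed the immediate- or batch-mode placement decision.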
Pages: 2808-2820
Page count: 13