On the benefits of the remote GPU virtualization mechanism: The rCUDA case

Cited by: 25
Authors
Silla, Federico [1]
Iserte, Sergio [2]
Reano, Carlos [1]
Prades, Javier [1]
Affiliations
[1] Univ Politecn Valencia, Valencia, Spain
[2] Univ Jaume 1, Castellon De La Plana, Castellon, Spain
Keywords
CUDA; GPU migration; GPU virtualization; InfiniBand; Slurm; Xen; molecular dynamics
DOI
10.1002/cpe.4072
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
Graphics processing units (GPUs) are being adopted in many computing facilities given their extraordinary computing power, which makes it possible to accelerate many general-purpose applications from different domains. However, GPUs also present several side effects, such as increased acquisition costs and larger space requirements. They also require more powerful power supplies. Furthermore, GPUs still consume some energy while idle, and their utilization is usually low for most workloads. In a similar way to virtual machines, the use of virtual GPUs may address the aforementioned concerns. In this regard, the remote GPU virtualization mechanism allows an application being executed in one node of the cluster to transparently use the GPUs installed in other nodes. Moreover, this technique makes it possible to share the GPUs present in the computing facility among the applications being executed in the cluster. In this way, several applications running in different (or the same) cluster nodes can share one or more GPUs located in other nodes of the cluster. Sharing GPUs should increase overall GPU utilization, thus reducing the negative impact of the side effects mentioned before. Reducing the total number of GPUs installed in the cluster may also become possible. In this paper, we explore some of the benefits that remote GPU virtualization brings to clusters. For instance, this mechanism allows an application to use all the GPUs present in the computing facility. Another benefit of this technique is that cluster throughput, measured as jobs completed per time unit, is noticeably increased when it is used; for some workloads, cluster throughput can be doubled. Furthermore, in addition to increasing overall GPU utilization, total energy consumption can be reduced by up to 40%. This may be key in the context of exascale computing facilities, which are subject to important energy constraints. Other benefits are related to the cloud computing domain, where a GPU can be easily shared among several virtual machines. Finally, GPU migration (and therefore server consolidation) is one more benefit of this novel technique.
Pages: 17
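The remote GPU virtualization mechanism described in the abstract is transparent to applications: they keep calling the regular CUDA Runtime API, and a replacement runtime library forwards those calls over the cluster network (e.g., InfiniBand) to a server process that owns a physical GPU in another node. The listing below is a minimal sketch of such an application, written against the standard CUDA toolkit only; nothing in it is rCUDA-specific, which is exactly the point, since the same unmodified source (and binary) can run either on a local GPU or, when linked against the rCUDA client library, on a remote one.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Element-wise vector addition: the kernel itself is ordinary CUDA. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    /* Under remote GPU virtualization these runtime calls are intercepted
       and executed on a GPU installed in another cluster node; the source
       code does not change at all. */
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);  /* implicit sync */

    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The program is built with nvcc as usual (for instance, nvcc vecadd.cu -o vecadd). With rCUDA, the remote GPU(s) an unmodified binary should use are selected outside the source code, typically through environment variables set before launch; the variable names shown in the rCUDA user guide (such as RCUDA_DEVICE_COUNT and RCUDA_DEVICE_0=<server>[:<gpu>]) are mentioned here only as an illustration and may differ between rCUDA versions.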