Providing Source Code Level Portability Between CPU and GPU with MapCG

Cited: 0
Authors
Hong Chuntao [1]
Chen Dehao [1]
Chen Yubei [2]
Chen Wenguang [1]
Zheng Weimin [1]
Lin Haibo [3]
Affiliations
[1] Department of Computer Science and Technology, Tsinghua University
[2] Department of Electronic Engineering, Tsinghua University
[3] IBM China Research Lab
Keywords
DOI: not available
CLC Number
TP332 [arithmetic units and controllers (CPU)]; TP391.41 [];
Discipline Code
081201; 080203;
Abstract
Graphics processing units (GPUs) have taken on an important role in the general-purpose computing market in recent years. At present, the common approach to programming GPUs is to write GPU-specific code with low-level GPU APIs such as CUDA. Although this approach can achieve good performance, it creates serious portability issues: programmers are required to write a specific version of the code for each potential target architecture, which results in high development and maintenance costs. We believe it is desirable to have a programming model that provides source code portability between CPUs and GPUs, as well as across different GPUs, so that programmers can write one version of the code that compiles and executes efficiently on either CPUs or GPUs without modification. In this paper, we propose MapCG, a MapReduce framework that provides source code level portability between CPUs and GPUs. In contrast to other approaches such as OpenCL, our framework, based on MapReduce, offers a high-level programming model and makes programming much easier. We describe the design of MapCG, including the MapReduce-style high-level programming framework and the runtime system on the CPU and GPU. A prototype of the MapCG runtime, supporting multi-core CPUs and NVIDIA GPUs, was implemented. Our experimental results show that this implementation can execute the same source code efficiently on multi-core CPU platforms and GPUs, achieving an average speedup of 1.6-2.5x over previous implementations of MapReduce on eight commonly used applications.
Pages: 42-56 (15 pages)
Related Papers (1)
[1] Ji F, Ma X S. Using shared memory to accelerate MapReduce on graphics processing units. Proc. the 25th IPDPS, 2011.