Orchestrating Cache Management and Memory Scheduling for GPGPU Applications

Cited by: 16
Authors
Mu, Shuai [1 ]
Deng, Yandong [1 ]
Chen, Yubei [1 ]
Li, Huaiming [1 ]
Pan, Jianming [1 ]
Zhang, Wenjun [1 ]
Wang, Zhihua [1 ]
Affiliations
[1] Inst Microelect Circuit & Syst, Beijing 100015, Peoples R China
Keywords
Cache management; general purpose computing on graphics processing units (GPGPU); memory latency divergence; memory scheduling; priority; warp; optimization; performance
DOI
10.1109/TVLSI.2013.2278025
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Modern graphics processing units (GPUs) deliver tremendous computing horsepower by running tens of thousands of threads concurrently. This massively parallel execution model effectively hides the long latency of off-chip memory accesses in graphics and other general-purpose applications that exhibit regular memory behavior. With the fast-growing demand for general purpose computing on GPUs (GPGPU), GPU workloads are becoming highly diversified and thus require a synergistic coordination of both computing and memory resources to unleash the computing power of GPUs. Accordingly, recent graphics processors have begun to integrate an on-die level-2 (L2) cache. The huge number of threads on GPUs, however, poses significant challenges to L2 cache design. Experiments on a variety of GPGPU applications reveal that the L2 cache may or may not improve overall performance, depending on the characteristics of the application. In this paper, we propose efficient techniques to improve GPGPU performance by orchestrating both the L2 cache and the memory controller in a unified framework. The basic philosophy is to exploit the temporal locality among the massive number of concurrent memory requests and to minimize the impact of divergent memory behavior among simultaneously executed groups of threads (warps). Our contributions are twofold. First, a priority-based cache management scheme is proposed to maximize the chance that frequently revisited data are kept in the cache. Second, an effective memory scheduling scheme is introduced that reorders memory requests in the memory controller according to their divergence behavior, reducing the average waiting time of warps. Simulation results show that our techniques improve overall performance by 10% on average for memory-intensive benchmarks, with gains of up to 30%.
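The abstract names two mechanisms but the record carries no implementation details. The two C++ sketches below illustrate one plausible reading of each technique, under stated assumptions; every identifier (PrioritySet, DivergenceAwareScheduler, MemRequest, MAX_PRIORITY, and so on) is hypothetical and not taken from the paper.

First, a minimal sketch of priority-based cache management for a single cache set: lines gain priority on each hit, misses evict the lowest-priority line, and new lines enter with low priority so one-touch streaming data cannot displace frequently revisited data.

#include <algorithm>
#include <cstdint>
#include <vector>

struct CacheLine {
    uint64_t tag = 0;
    bool valid = false;
    int priority = 0;            // higher = more likely to be reused
};

class PrioritySet {              // one set of a set-associative L2 cache
    std::vector<CacheLine> ways_;
    static constexpr int MAX_PRIORITY = 7;  // saturating-counter ceiling
public:
    explicit PrioritySet(int assoc) : ways_(assoc) {}

    // Returns true on a hit. Hits promote the line's priority; misses
    // evict the lowest-priority (or an invalid) way and insert the new
    // line with low priority.
    bool access(uint64_t tag) {
        for (auto& w : ways_) {
            if (w.valid && w.tag == tag) {
                w.priority = std::min(w.priority + 1, MAX_PRIORITY);
                return true;
            }
        }
        CacheLine* victim = &ways_[0];
        for (auto& w : ways_) {
            if (!w.valid) { victim = &w; break; }
            if (w.priority < victim->priority) victim = &w;
        }
        *victim = CacheLine{tag, true, 1};
        return false;
    }
};

Second, a sketch of divergence-aware memory scheduling, read here as shortest-job-first at warp granularity: among pending requests, the controller issues one belonging to the warp with the fewest outstanding requests, so nearly complete warps resume sooner and the average warp waiting time drops.

#include <cstdint>
#include <deque>
#include <unordered_map>

struct MemRequest {
    int warp_id;
    uint64_t addr;
};

class DivergenceAwareScheduler {
    std::deque<MemRequest> queue_;              // pending requests, FCFS order
    std::unordered_map<int, int> outstanding_;  // warp id -> #pending requests
public:
    void enqueue(const MemRequest& r) {
        queue_.push_back(r);
        ++outstanding_[r.warp_id];
    }

    // Removes and returns (via out) the next request to issue: the oldest
    // request of the warp with the fewest outstanding requests.
    bool pick_next(MemRequest& out) {
        if (queue_.empty()) return false;
        auto best = queue_.begin();
        for (auto it = queue_.begin(); it != queue_.end(); ++it)
            if (outstanding_[it->warp_id] < outstanding_[best->warp_id])
                best = it;
        out = *best;
        --outstanding_[out.warp_id];
        queue_.erase(best);
        return true;
    }
};

A production memory controller would trade this warp-level criterion off against DRAM row-buffer locality (as in FR-FCFS scheduling); the sketch ignores bank state for brevity.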
Pages: 1803-1814
Page count: 12