Adaptive Cache Management for Energy-efficient GPU Computing

Cited by: 96
Authors
Chen, Xuhao [1 ,2 ,3 ]
Chang, Li-Wen [3 ]
Rodrigues, Christopher I. [3 ]
Lv, Jie [3 ]
Wang, Zhiying [1 ,2 ]
Hwu, Wen-Mei [3 ]
Affiliations
[1] Natl Univ Def Technol, State Key Lab High Performance Comp, Changsha, Hunan, Peoples R China
[2] Natl Univ Def Technol, Sch Comp, Changsha, Hunan, Peoples R China
[3] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL USA
Source
2014 47th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) | 2014
Keywords
GPGPU; cache management; bypass; warp throttling; REPLACEMENT;
DOI
10.1109/MICRO.2014.11
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
With the SIMT execution model, GPUs can hide memory latency through massive multithreading for many applications that have regular memory access patterns. To support applications with irregular memory access patterns, cache hierarchies have been introduced to GPU architectures to capture temporal and spatial locality and mitigate the effect of irregular accesses. However, GPU caches exhibit poor efficiency due to the mismatch between the throughput-oriented execution model and the cache hierarchy design, which limits system performance and energy efficiency. The massive number of memory requests generated by GPUs causes cache contention and resource congestion. Existing CPU cache management policies, designed for multicore systems, can be suboptimal when directly applied to GPU caches. We propose a specialized cache management policy for GPGPUs. The cache hierarchy is protected from contention by a bypass policy based on reuse distance. Contention and resource congestion are detected at runtime. To avoid over-saturating on-chip resources, the bypass policy is coordinated with warp throttling to dynamically control the number of active warps. We also propose a simple predictor to dynamically estimate the optimal number of active warps that can take full advantage of the cache space and on-chip resources. Experimental results show that cache efficiency is significantly improved and on-chip resources are better utilized for cache-sensitive benchmarks. This results in a harmonic mean IPC improvement of 74% and 17% (maximum 661% and 44% IPC improvement), compared to the baseline GPU architecture and optimal static warp throttling, respectively.
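The abstract's core mechanism is a bypass decision driven by reuse distance: a cache line whose accesses are separated by more distinct addresses than the cache can hold will be evicted before it is reused, so inserting it only pollutes the cache. The sketch below is not the paper's implementation; it is a minimal, hypothetical illustration of the idea, with made-up helper names (`reuse_distance_trace`, `should_bypass`) and a simplified fully-associative LRU stack model.

```python
def reuse_distance_trace(addresses):
    """Compute the LRU stack (reuse) distance of each access in a trace.

    The reuse distance of an access is the number of *distinct* addresses
    touched since the previous access to the same address; a first-time
    access has no finite reuse distance and is reported as None.
    """
    stack = []  # LRU stack: most recently used address at the end
    distances = []
    for addr in addresses:
        if addr in stack:
            # Distinct addresses touched since the last access to addr.
            distances.append(len(stack) - 1 - stack.index(addr))
            stack.remove(addr)
        else:
            distances.append(None)  # cold access: infinite reuse distance
        stack.append(addr)
    return distances


def should_bypass(predicted_distance, cache_lines):
    """Bypass when the predicted reuse distance exceeds what the cache holds.

    If more than `cache_lines` distinct lines intervene before reuse, the
    line would be evicted before its next access, so caching it is wasted
    capacity; sending the request around the cache avoids the pollution.
    """
    return predicted_distance is None or predicted_distance >= cache_lines
```

For example, in the trace `a b a` the second access to `a` has reuse distance 1, so it is worth caching even in a tiny cache, whereas a streaming access pattern yields only infinite distances and every request bypasses. The paper's actual policy additionally detects contention at runtime and coordinates bypassing with warp throttling, which this sketch does not model.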
Pages: 343-355
Page count: 13