Accelerate GPU Concurrent Kernel Execution by Mitigating Memory Pipeline Stalls

Cited by: 25
Authors
Dai, Hongwen [1 ]
Lin, Zhen [1 ]
Li, Chao [1 ]
Zhao, Chen [2 ]
Wang, Fei [2 ]
Zheng, Nanning [2 ]
Zhou, Huiyang [1 ]
Affiliations
[1] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27695 USA
[2] Xi An Jiao Tong Univ, Sch Elect & Informat Engn, Xian, Shaanxi, Peoples R China
Source
2018 24TH IEEE INTERNATIONAL SYMPOSIUM ON HIGH PERFORMANCE COMPUTER ARCHITECTURE (HPCA) | 2018
Keywords
HIGH-PERFORMANCE; CACHE;
DOI
10.1109/HPCA.2018.00027
Chinese Library Classification (CLC) Number
TP3 [Computing and Computer Technology]
Discipline Classification Code
0812
Abstract
With continued technology scaling, graphics processing units (GPUs) incorporate ever more computing resources, and it has become difficult for a single GPU kernel to fully utilize them. One solution to improve resource utilization is concurrent kernel execution (CKE). Early CKE schemes mainly target leftover resources; however, they neither optimize resource utilization nor provide fairness among concurrent kernels. Spatial multitasking assigns a subset of streaming multiprocessors (SMs) to each kernel. Although it achieves better fairness, it does not address resource underutilization within an SM. Thus, intra-SM sharing has been proposed to issue thread blocks from different kernels to the same SM. However, as shown in this study, intra-SM sharing schemes may undermine overall performance due to severe interference among kernels. Specifically, because concurrent kernels share the memory subsystem, one kernel, even a compute-intensive one, may starve because it cannot issue its memory instructions in time. Moreover, severe L1 D-cache thrashing and memory pipeline stalls caused by one kernel, especially a memory-intensive one, degrade the other kernels, further hurting overall performance. In this study, we investigate approaches to overcome these problems in intra-SM sharing. We first show that cache partitioning techniques proposed for CPUs are not effective for GPUs. We then propose two approaches to reduce memory pipeline stalls: the first balances the memory accesses of concurrent kernels; the second limits the number of inflight memory instructions issued by each kernel. Our evaluation shows that the proposed schemes improve the weighted speedup of two state-of-the-art intra-SM sharing schemes, Warped-Slicer and SMK, by 24.6% and 27.2% on average, respectively, with lightweight hardware overhead.
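The two mechanisms above can be pictured as an issue-arbitration policy inside each SM. Below is a minimal Python sketch of such an arbiter under stated assumptions: the `Kernel` class, the `MAX_INFLIGHT_PER_KERNEL` quota, and the fewest-issued-first balancing rule are hypothetical simplifications for intuition, not the paper's actual hardware design.

```python
# Toy model of a per-SM memory-instruction issue arbiter for kernels
# co-running under intra-SM sharing. It illustrates, in simplified form,
# the two ideas from the abstract:
#   1) balance memory-instruction issue across concurrent kernels, and
#   2) cap the number of inflight memory instructions per kernel.
# All names and parameters are hypothetical, not the real hardware design.
from collections import deque
from dataclasses import dataclass

MAX_INFLIGHT_PER_KERNEL = 8  # hypothetical throttling limit

@dataclass
class Kernel:
    name: str
    pending: deque           # memory instructions waiting to issue
    inflight: int = 0        # issued but not yet completed
    issued_total: int = 0    # running count, used for balancing

def pick_kernel(kernels):
    """Select the next kernel to issue a memory instruction from.

    A kernel is eligible if it has pending memory instructions and is
    below its inflight cap (the throttling rule). Among eligible
    kernels, prefer the one that has issued the fewest instructions so
    far (the balancing rule).
    """
    eligible = [k for k in kernels
                if k.pending and k.inflight < MAX_INFLIGHT_PER_KERNEL]
    return min(eligible, key=lambda k: k.issued_total) if eligible else None

def step(kernels):
    """One toy 'cycle': issue at most one memory instruction."""
    k = pick_kernel(kernels)
    if k is None:
        print("LSU idle: kernels drained or at their inflight cap")
        return
    k.pending.popleft()
    k.inflight += 1
    k.issued_total += 1
    print(f"issue from {k.name:9s} inflight={k.inflight}")

if __name__ == "__main__":
    # A memory-intensive kernel with many pending loads competes with a
    # compute-intensive kernel that has only a few; no accesses complete
    # in this short window, so the cap eventually throttles mem_heavy.
    mem_heavy = Kernel("mem_heavy", deque(["ld"] * 20))
    compute   = Kernel("compute",   deque(["ld"] * 4))
    for _ in range(16):
        step([mem_heavy, compute])
```

In this toy run the balancing rule alternates issue between the two kernels until the compute-intensive kernel drains, and the inflight cap then prevents the memory-intensive kernel from monopolizing the memory pipeline; in the paper, these decisions are hardware mechanisms in the SM's issue stage.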
Pages: 208-220
Page count: 13