Optimizing and Auto-tuning Belief Propagation on the GPU

Cited by: 6
Authors: Grauer-Gray, Scott [1]; Cavazos, John [1]
Affiliation: [1] Univ Delaware, Newark, DE 19716 USA
Source: LANGUAGES AND COMPILERS FOR PARALLEL COMPUTING | 2011, Vol. 6548
DOI: 10.1007/978-3-642-19595-2_9
CLC classification: TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
A CUDA kernel will use high-latency local memory for storage when there are not enough registers to hold the required data, or when the data is an array accessed with a variable index inside a loop. Because accesses to local memory take longer than accesses to registers or shared memory, it is desirable to minimize local-memory use. This paper analyzes strategies for reducing the use of local memory in a CUDA implementation of belief propagation for stereo processing. We perform experiments using registers as well as shared memory as alternate locations for data initially placed in local memory, and then develop a hybrid implementation that allows the programmer to store an adjustable amount of data in shared, register, and local memory. We show results of running our optimized implementations on two different stereo sets and across three generations of NVIDIA GPUs, and introduce an auto-tuning implementation that generates an optimized belief propagation implementation for any input stereo set on any CUDA-capable GPU.
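The local-memory behavior the abstract describes can be illustrated with a minimal CUDA sketch. This is not code from the paper; kernel names, array sizes, and the block-size assumption are illustrative only:

```cuda
#define VALS_PER_THREAD 8

// A per-thread array indexed with a runtime-variable subscript generally
// cannot be promoted to registers, so the compiler is likely to place it
// in high-latency local memory.
__global__ void sumLocal(const float *in, float *out, int n)
{
    float buf[VALS_PER_THREAD];          // likely spilled to local memory
    int tid  = blockIdx.x * blockDim.x + threadIdx.x;
    int base = tid * VALS_PER_THREAD;
    for (int i = 0; i < VALS_PER_THREAD; ++i)
        buf[i] = in[base + i];

    float acc = 0.0f;
    for (int i = 0; i < n; ++i)          // n is unknown at compile time:
        acc += buf[i % VALS_PER_THREAD]; // the variable index forces local memory
    out[tid] = acc;
}

// One alternative in the spirit of the paper's experiments: stage the same
// per-thread data in on-chip shared memory, giving each thread its own
// slice of the block's shared array (block size assumed <= 128 here).
__global__ void sumShared(const float *in, float *out, int n)
{
    __shared__ float buf[128 * VALS_PER_THREAD];
    float *mine = &buf[threadIdx.x * VALS_PER_THREAD];
    int tid  = blockIdx.x * blockDim.x + threadIdx.x;
    int base = tid * VALS_PER_THREAD;
    for (int i = 0; i < VALS_PER_THREAD; ++i)
        mine[i] = in[base + i];

    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc += mine[i % VALS_PER_THREAD]; // now served from shared memory
    out[tid] = acc;
}
```

A hybrid along the lines the abstract mentions would split the per-thread values between a shared-memory slice and a fixed set of scalar register variables (selected through an unrolled switch), with the split point exposed as the tunable parameter an auto-tuner can adjust.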
Pages: 121-135 (15 pages)