Algorithms to Take Advantage of Hardware Prefetching

Cited by: 0
Authors
Pan, Shen [1 ]
Cherng, Cary [2 ]
Dick, Kevin [3 ]
Ladner, Richard E. [4 ]
Affiliations
[1] Amazon, 605 5th Ave S, Seattle, WA 98104 USA
[2] Google Kirkland, Kirkland, WA 98003 USA
[3] CALTECH, Comp Sci Dept, Pasadena, CA 25680 USA
[4] Univ Washington, Dept Comp Sci, Seattle, WA 98195 USA
Source
PROCEEDINGS OF THE NINTH WORKSHOP ON ALGORITHM ENGINEERING AND EXPERIMENTS AND THE FOURTH WORKSHOP ON ANALYTIC ALGORITHMICS AND COMBINATORICS, 2007
DOI
Not available
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
Cache-oblivious and cache-aware algorithms have been developed to minimize cache misses. Some of the newest processors provide hardware prefetching, which avoids cache misses by predicting ahead of time what memory will be needed and bringing it into the cache before it is used. It is shown that hardware prefetching permits the standard Floyd-Warshall algorithm for all-pairs shortest paths to outperform cache-oblivious and cache-aware algorithms. A simple improvement to the standard dynamic programming algorithm yields an algorithm that takes advantage of prefetching and outperforms cache-oblivious and cache-aware algorithms. Finally, it is shown that variants of standard FFT algorithms exhibit good prefetching performance.
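For reference, a minimal C sketch of the standard Floyd-Warshall triple loop the abstract refers to: its innermost loop scans rows i and k sequentially, which is exactly the kind of streaming access pattern a hardware prefetcher can predict. The flat row-major layout, the use of double distances, and the function name floyd_warshall are illustrative assumptions, not details taken from the paper.

/* Standard Floyd-Warshall on an n x n distance matrix stored row-major.
 * The inner loop over j walks dist[i*n + ...] and dist[k*n + ...]
 * sequentially, a prefetch-friendly streaming access pattern.
 * Layout and names are illustrative, not from the paper. */
#include <stddef.h>

void floyd_warshall(double *dist, size_t n)
{
    for (size_t k = 0; k < n; k++) {
        for (size_t i = 0; i < n; i++) {
            double dik = dist[i * n + k];        /* invariant in the inner loop */
            for (size_t j = 0; j < n; j++) {     /* sequential over rows i and k */
                double via_k = dik + dist[k * n + j];
                if (via_k < dist[i * n + j])
                    dist[i * n + j] = via_k;
            }
        }
    }
}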
Pages: 91 / +
Number of pages: 2