The Tao of Parallelism in Algorithms

Cited by: 98
Authors
Pingali, Keshav [1 ,2 ]
Donald Nguyen [1 ]
Kulkarni, Milind [4 ]
Burtscher, Martin [3 ]
Hassaan, M. Amber
Kaleem, Rashid [1 ]
Lee, Tsung-Hsien
Lenharth, Andrew [2 ]
Manevich, Roman [2 ]
Mendez-Lojo, Mario [2 ]
Prountzos, Dimitrios [1 ]
Sui, Xin [1 ]
Affiliations
[1] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
[2] Univ Texas Austin, Inst Computat Engn & Sci, Austin, TX 78712 USA
[3] Texas State Univ San Marcos, Dept Comp Sci, San Marcos, TX USA
[4] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Funding
U.S. National Science Foundation;
Keywords
Algorithms; Languages; Performance; amorphous data-parallelism; Galois system; irregular programs; operator formulation; tao-analysis;
DOI
10.1145/1993316.1993501
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification
081202; 0835;
Abstract
For more than thirty years, the parallel programming community has used the dependence graph as the main abstraction for reasoning about and exploiting parallelism in "regular" algorithms that use dense arrays, such as finite-differences and FFTs. In this paper, we argue that the dependence graph is not a suitable abstraction for algorithms in new application areas like machine learning and network analysis in which the key data structures are "irregular" data structures like graphs, trees, and sets. To address the need for better abstractions, we introduce a data-centric formulation of algorithms called the operator formulation in which an algorithm is expressed in terms of its action on data structures. This formulation is the basis for a structural analysis of algorithms that we call tao-analysis. Tao-analysis can be viewed as an abstraction of algorithms that distills out algorithmic properties important for parallelization. It reveals that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous in algorithms, and that, depending on the tao-structure of the algorithm, this parallelism may be exploited by compile-time, inspector-executor or optimistic parallelization, thereby unifying these seemingly unrelated parallelization techniques. Regular algorithms emerge as a special case of irregular algorithms, and many application-specific optimization techniques can be generalized to a broader context. These results suggest that the operator formulation and tao-analysis of algorithms can be the foundation of a systematic approach to parallel programming.
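To make the operator formulation concrete, here is a minimal sequential sketch (illustrative only, not code from the paper or the Galois API): single-source shortest paths by chaotic relaxation, written as a worklist of active nodes and an operator that touches only a small neighborhood. The graph representation, the function name sssp, and the std::deque worklist are assumptions made for this example; a runtime such as Galois would apply the operator to many active elements in parallel, serializing only those whose neighborhoods overlap, which is the amorphous data-parallelism described in the abstract.

// Illustrative sketch of the operator formulation (sequential; not from the paper).
// Active elements: nodes on the worklist. Operator: relax the out-edges of an
// active node, activating any neighbor whose distance improves.
#include <cstddef>
#include <deque>
#include <limits>
#include <vector>

struct Edge { std::size_t dst; int weight; };
using Graph = std::vector<std::vector<Edge>>;   // adjacency lists

std::vector<int> sssp(const Graph& g, std::size_t source) {
    std::vector<int> dist(g.size(), std::numeric_limits<int>::max());
    dist[source] = 0;

    // Worklist of active elements. The algorithm is unordered: any schedule
    // of operator applications reaches the same fixed point.
    std::deque<std::size_t> worklist{source};
    while (!worklist.empty()) {
        std::size_t n = worklist.front();
        worklist.pop_front();

        // Operator application: its neighborhood is node n plus the endpoints
        // of n's out-edges. Applications with disjoint neighborhoods are
        // independent and could run in parallel.
        for (const Edge& e : g[n]) {
            int candidate = dist[n] + e.weight;
            if (candidate < dist[e.dst]) {
                dist[e.dst] = candidate;
                worklist.push_back(e.dst);   // newly activated element
            }
        }
    }
    return dist;
}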
Pages: 12-25
Page count: 14