Automatic parallelization of a class of irregular loops for distributed memory systems

Cited by: 0
Authors
Ravishankar, Mahesh [1 ]
Eisenlohr, John [1 ]
Pouchet, Louis-Noël [2 ]
Ramanujam, J. [3 ]
Rountev, Atanas [1 ]
Sadayappan, P. [1 ]
Affiliations
[1] The Ohio State University, Columbus, OH, United States
[2] University of California, Los Angeles, CA, United States
[3] Louisiana State University, Baton Rouge, LA, United States
Source
Ravishankar, Mahesh (ravishan@cse.ohio-state.edu) | 2014 | Association for Computing Machinery, 2 Penn Plaza, Suite 701, New York, NY 10121-0701, United States | Issue 01
Funding
U.S. National Science Foundation;
Keywords
Distributed-memory systems; Inspector-executor; Irregular applications; Parallelization;
DOI
10.1145/2660251
Abstract
Many scientific applications spend significant time in loops that are parallel except for dependences arising from associative reduction operations. However, these loops often contain data-dependent control flow and array-access patterns. Traditional optimizations that rely on purely static analysis fail to generate parallel code in such cases. This article proposes an approach to automatic parallelization for distributed-memory environments that combines static and runtime analysis. We formalize the computations targeted by this approach and develop algorithms to detect such computations. We also describe algorithms to generate a parallel inspector that performs a runtime analysis of control-flow and array-access patterns, and a parallel executor that takes advantage of this information. The effectiveness of the approach is demonstrated on several benchmarks that were automatically transformed using a prototype compiler; for these, the inspector overheads and the performance of the executor code were measured. The benefit for real-world applications was also demonstrated through similar manual transformations of atmospheric modeling software. © 2014 ACM.
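A minimal illustration of the inspector/executor idea described above is sketched below in C. This is not the paper's compiler-generated MPI code: the sparse-matrix-style loop, the array names (rowptr, col, val, x), and the two-way block partitioning are assumptions chosen purely for illustration. The inspector walks the loop's indirection arrays at runtime to discover which data elements each partition will read (information a real distributed-memory implementation would turn into communication schedules), and the executor then runs each partition's iterations independently, combining only the associative reduction at the end.

/*
 * Minimal sketch of the inspector/executor pattern for one irregular loop.
 * NOT the compiler-generated code from the paper; loop body, array names,
 * and block partitioning are illustrative assumptions.  The loop is parallel
 * except for the associative reduction into `sum`, and its reads of `x` are
 * known only at runtime through the indirection array `col`.
 */
#include <stdio.h>

#define N      8     /* outer-loop iterations (rows)        */
#define NNZ    16    /* indirect accesses (nonzeros)        */
#define NPARTS 2     /* pretend partitions ("processes")    */

int main(void) {
    int    rowptr[N + 1] = {0, 2, 4, 6, 8, 10, 12, 14, 16};
    int    col[NNZ]      = {0, 3, 1, 2, 0, 5, 4, 7, 2, 6, 1, 5, 3, 7, 0, 6};
    double val[NNZ], x[N];
    for (int j = 0; j < NNZ; j++) val[j] = 1.0;
    for (int i = 0; i < N; i++)  x[i] = (double)i;

    /* Inspector: trace the access pattern without doing the computation,
     * recording which elements of x each block of rows will read.  A real
     * inspector would use this to build communication (ghost) lists.      */
    int needed[NPARTS][N] = {{0}};
    int rows_per_part = N / NPARTS;
    for (int p = 0; p < NPARTS; p++)
        for (int i = p * rows_per_part; i < (p + 1) * rows_per_part; i++)
            for (int j = rowptr[i]; j < rowptr[i + 1]; j++)
                needed[p][col[j]] = 1;          /* runtime-discovered read */

    for (int p = 0; p < NPARTS; p++) {
        printf("partition %d reads x elements:", p);
        for (int k = 0; k < N; k++)
            if (needed[p][k]) printf(" %d", k);
        printf("\n");
    }

    /* Executor: each partition runs its block of rows independently; the only
     * cross-partition dependence is the associative reduction, combined at the
     * end (a collective reduction in a distributed-memory version).           */
    double partial[NPARTS] = {0.0};
    for (int p = 0; p < NPARTS; p++)
        for (int i = p * rows_per_part; i < (p + 1) * rows_per_part; i++) {
            if (rowptr[i] == rowptr[i + 1]) continue;  /* data-dependent guard */
            for (int j = rowptr[i]; j < rowptr[i + 1]; j++)
                partial[p] += val[j] * x[col[j]];
        }

    double sum = 0.0;
    for (int p = 0; p < NPARTS; p++) sum += partial[p];
    printf("reduction result: %f\n", sum);
    return 0;
}

In the approach the abstract describes, the inspector itself is parallel and runs across processes; the final combination of the partial sums would typically be performed with a collective reduction rather than the sequential loop used in this toy sketch.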