Automatic Parallelization: Executing Sequential Programs on a Task-Based Parallel Runtime

Cited by: 0
Authors
Alcides Fonseca
Bruno Cabral
João Rafael
Ivo Correia
Affiliations
[1] Universidade de Coimbra, Department of Informatics Engineering
Source
International Journal of Parallel Programming | 2016 / Vol. 44
Keywords
Automatic parallelization; Task-based runtime; Symbolic analysis;
DOI: not available
Abstract
There are billions of lines of sequential code in today’s software that do not benefit from the parallelism available in modern multicore architectures. Automatically parallelizing sequential code, to promote an efficient use of the available parallelism, has long been a research goal. This work proposes a new approach for achieving that goal. We created a new parallelizing compiler that analyses the read and write instructions, as well as the control-flow modifications, in programs to identify a set of dependencies between the instructions in the program. Based on the resulting dependency graph, the compiler then rewrites and organizes the program in a task-oriented structure. Each parallel task is composed of instructions that cannot be executed in parallel with one another. A work-stealing-based parallel runtime is responsible for scheduling the generated tasks and managing their granularity. Furthermore, a compile-time granularity control mechanism avoids creating unnecessary data structures. This work focuses on the Java language, but the techniques are general enough to be applied to other programming languages. We have evaluated our approach on 8 benchmark programs against OoOJava, achieving higher speedups; in some cases, values were close to those of a manual parallelization. The resulting parallel code also has the advantage of being readable and easily configured to further improve its performance manually.
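As a rough illustration of the kind of task-oriented rewrite the abstract describes, the sketch below shows a sequential reduction loop restructured into recursive tasks scheduled by Java's standard work-stealing runtime (`ForkJoinPool`). The class names, the choice of benchmark loop, and the fixed granularity threshold are illustrative assumptions for this sketch, not the paper's actual compiler output; the paper's compiler derives such structures automatically from the dependency graph.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hedged sketch: a sequential loop and a hand-written task-based
// equivalent of the transformation the compiler performs automatically.
public class AutoParallelSketch {

    // Original sequential computation: sum of squares over a range.
    static long sumSquaresSeq(long[] a, int lo, int hi) {
        long s = 0;
        for (int i = lo; i < hi; i++) s += a[i] * a[i];
        return s;
    }

    // Task-oriented rewrite: independent halves become separate tasks
    // that the work-stealing pool can run on different cores.
    static class SumSquaresTask extends RecursiveTask<Long> {
        static final int THRESHOLD = 1_000; // granularity control (assumed value)
        final long[] a; final int lo, hi;
        SumSquaresTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        @Override protected Long compute() {
            if (hi - lo <= THRESHOLD) {
                // Range too small: fall back to the sequential code,
                // avoiding the overhead of further task creation.
                return sumSquaresSeq(a, lo, hi);
            }
            int mid = (lo + hi) >>> 1;
            SumSquaresTask left = new SumSquaresTask(a, lo, mid);
            left.fork();                                  // make left half stealable
            long right = new SumSquaresTask(a, mid, hi).compute(); // run right half here
            return left.join() + right;                   // combine partial results
        }
    }

    public static void main(String[] args) {
        long[] a = new long[10_000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        long seq = sumSquaresSeq(a, 0, a.length);
        long par = ForkJoinPool.commonPool().invoke(new SumSquaresTask(a, 0, a.length));
        System.out.println(seq == par); // parallel result matches sequential
    }
}
```

Because the two halves of the loop have no read/write dependencies between them, they form separate tasks; dependent instructions would instead stay together inside one task, which is what the dependency analysis in the paper establishes.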
Pages: 1337–1358 (21 pages)