Automatic translation of data parallel programs for heterogeneous parallelism through OpenMP offloading

Cited by: 0
Authors
Farui Wang
Weizhe Zhang
Haonan Guo
Meng Hao
Gangzhao Lu
Zheng Wang
Affiliations
[1] Harbin Institute of Technology, School of Computer Science and Technology
[2] University of Leeds, School of Computing
Source
The Journal of Supercomputing | 2021 / Vol. 77
Keywords
Heterogeneous computing; Source-to-source translation; OpenMP offloading; Compilation optimization; GPUs
DOI
Not available
Abstract
Heterogeneous multicores like GPGPUs are now commonplace in modern computing systems. Although heterogeneous multicores offer the potential for high performance, programmers struggle to program such systems. This paper presents OAO, a compiler-based approach to automatically translate shared-memory OpenMP data-parallel programs to run on heterogeneous multicores through OpenMP offloading directives. Given the large user base of shared-memory OpenMP programs, our approach allows programmers to continue using a familiar single-source programming model while benefiting from heterogeneous performance. OAO introduces a novel runtime optimization scheme that automatically eliminates unnecessary host–device communication, minimizing the communication overhead between the host and the accelerator device. We evaluate OAO by applying it to 23 benchmarks from the PolyBench and Rodinia suites on two distinct GPU platforms. Experimental results show that OAO achieves up to a 32× speedup over the original OpenMP version and reduces the host–device communication overhead by up to 99% over the hand-translated version.
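To make the kind of rewrite the abstract describes concrete, the sketch below shows, for a simple SAXPY loop, a shared-memory "#pragma omp parallel for" version alongside an OpenMP 4.5+ offloading version with explicit map clauses. This is a hand-written illustration under standard OpenMP offloading semantics, not OAO's actual generated code; the function names and the SAXPY kernel are chosen only for the example.

/* Illustrative sketch only: the shared-memory -> offloading rewrite
   that the paper automates, written by hand for a SAXPY kernel.
   Compile with OpenMP offloading support, e.g.
   clang -fopenmp -fopenmp-targets=nvptx64 saxpy.c            */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

/* Original shared-memory OpenMP version: runs on host CPU threads. */
void saxpy_host(float a, const float *x, float *y, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Offloaded version using OpenMP target directives. The map clauses
   make the host-device transfers explicit: x is copied to the device,
   y is copied in both directions. */
void saxpy_offload(float a, const float *x, float *y, int n) {
    #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float *x = malloc(N * sizeof(float));
    float *y = malloc(N * sizeof(float));
    if (!x || !y) return 1;
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_offload(2.0f, x, y, N);   /* expect y[i] == 4.0f afterwards */

    printf("y[0] = %f\n", y[0]);
    free(x);
    free(y);
    return 0;
}

Under a naive translation, every offloaded region carries map(to:)/map(tofrom:) transfers for all of its arrays, even when the data is already resident on the device from a previous kernel; eliminating exactly this kind of redundant host–device traffic is what the runtime optimization scheme described in the abstract targets.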
Pages: 4957–4987 (30 pages)