GPU Support for Automatic Generation of Finite-Differences Stencil Kernels

Cited: 0
Authors
Mickus Rodrigues, Vitor Hugo [1 ]
Cavalcante, Lucas [1 ]
Pereira, Maelso Bruno [1 ]
Luporini, Fabio [2 ]
Reguly, Istvan [3 ]
Gorman, Gerard [2 ]
de Souza, Samuel Xavier [1 ]
Affiliations
[1] Univ Fed Rio Grande do Norte, Natal, RN, Brazil
[2] Imperial Coll London, London, England
[3] Pazmany Peter Catholic Univ, Budapest, Hungary
Source
HIGH PERFORMANCE COMPUTING, CARLA 2019 | 2020, Vol. 1087
Keywords
GPU; Domain Specific Languages; Finite-differences; Stencil kernels; Parallel architectures; Devito; OPS; LANGUAGE
DOI
10.1007/978-3-030-41005-6_16
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The growth of data to be processed in the Oil & Gas industry matches the requirements imposed by evolving algorithms based on stencil computations, such as Full Waveform Inversion and Reverse Time Migration. Graphics processing units (GPUs) are an attractive architectural target for stencil computations because of their high degree of data parallelism. However, rapid architectural and technological progress makes it difficult for even the most proficient programmers to stay current with advances at the micro-architectural level. In this work, we present an extension to Devito, an open-source compiler designed to produce highly optimized finite-difference kernels for use in inversion methods. We couple it with the Oxford Parallel Domain Specific Languages (OP-DSL) to enable automatic code generation for GPU architectures from a high-level representation, so that users coding at a symbolic level can effortlessly exploit the processing power of GPUs. The implemented backend is evaluated on an NVIDIA GTX Titan Z and an NVIDIA Tesla V100 in terms of operational intensity through the roofline model, for varying space-order discretizations of 3D acoustic isotropic wave-propagation stencil kernels, with and without symbolic optimizations. It achieves approximately 63% of the V100's peak performance and 24% of the Titan Z's peak performance for stencil kernels over grids with 256³ points. Our study indicates that improving memory usage is the most effective strategy for raising the performance of the implemented solution on the evaluated architectures.
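To make the abstract's two central ideas concrete, here is a minimal, hand-written sketch (not Devito's generated code, and much simpler than the paper's 3D high-order kernels): a 1D second-order acoustic wave-equation stencil update, and the roofline bound used to evaluate such kernels. The V100 figures in the usage note are nominal published specs, not numbers taken from the paper.

```python
def stencil_step(u_prev, u_curr, r2):
    """One leapfrog time step of the 1D acoustic wave equation.

    u_next[i] = 2*u[i] - u_prev[i] + r2 * (u[i+1] - 2*u[i] + u[i-1]),
    where r2 = (c*dt/dx)**2 is the squared Courant number.
    Boundary points are held fixed (Dirichlet).
    """
    n = len(u_curr)
    u_next = list(u_curr)
    for i in range(1, n - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next


def roofline(oi, peak_flops, peak_bandwidth):
    """Attainable FLOP/s for a kernel with operational intensity `oi`
    (FLOPs per byte moved), under the roofline model: performance is
    bounded by compute peak or by bandwidth times intensity."""
    return min(peak_flops, oi * peak_bandwidth)
```

For example, with nominal V100 figures (about 7.8 TFLOP/s FP64 peak and 900 GB/s HBM2 bandwidth), a kernel at 2 FLOP/byte sits on the bandwidth-limited side of the roofline (`roofline(2.0, 7.8e12, 900e9)` gives 1.8e12 FLOP/s), which is why the abstract points to memory usage as the lever for further performance.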
Pages: 230-244 (15 pages)