Hybrid MPI and CUDA Parallelization for CFD Applications on Multi-GPU HPC Clusters

Cited by: 29
Authors
Lai, Jianqi [1 ]
Yu, Hang [1 ]
Tian, Zhengyu [1 ]
Li, Hua [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Aerosp Sci & Engn, Changsha 410073, Peoples R China
Keywords
DIRECT NUMERICAL-SIMULATION; FLOW SOLVER; MESHLESS METHOD; OPTIMIZATION; CPU/GPU; SEQUEL; SCHEME; GRIDS;
DOI
10.1155/2020/8862123
CLC Number (Chinese Library Classification)
TP31 [Computer Software]
Discipline Classification Codes
081202; 0835
Abstract
Graphics processing units (GPUs) offer strong floating-point performance and high memory bandwidth for data-parallel workloads and have been widely adopted in high-performance computing (HPC). The compute unified device architecture (CUDA) serves as the parallel computing platform and programming model for the GPU, reducing programming complexity, and programmable GPUs are becoming popular in computational fluid dynamics (CFD) applications. In this work, we propose a hybrid parallel algorithm combining the message passing interface (MPI) and CUDA for CFD applications on multi-GPU HPC clusters. The AUSM+-up upwind scheme and the three-step Runge-Kutta method are used for spatial and temporal discretization, respectively, and turbulence is modeled with the k-omega SST two-equation model. The CPU only manages GPU execution and communication, while the GPU performs all data processing. Parallel-execution and memory-access optimizations are applied to the GPU-based CFD code. We propose a nonblocking communication method that fully overlaps GPU computing, CPU-CPU communication, and CPU-GPU data transfer by creating two CUDA streams. Furthermore, a one-dimensional domain decomposition is used to balance the workload among GPUs. Finally, we evaluate the hybrid parallel algorithm on compressible turbulent flow over a flat plate, discussing the performance of the single-GPU implementation and the scalability of multi-GPU clusters. Performance measurements show that multi-GPU parallelization achieves a speedup of more than 36 times over CPU-based parallel computing, and that the parallel algorithm scales well.
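The overlap strategy summarized in the abstract lends itself to a short illustration. The following CUDA/MPI sketch is not the authors' code; the kernel names, buffer sizes, halo width, and the periodic one-dimensional neighbour layout are illustrative assumptions. It shows the essential pattern: halo packing, host-device copies, and nonblocking MPI exchange are issued on one CUDA stream while the large interior kernel runs concurrently on a second stream, so communication is hidden behind computation.

// Minimal sketch (assumption-laden, not the authors' implementation) of
// overlapping interior computation with halo exchange using two CUDA streams
// and nonblocking MPI, for a 1-D domain decomposition.
#include <mpi.h>
#include <cuda_runtime.h>

// Stand-in kernels for the real flux/residual kernels.
__global__ void pack_halo(const double *q, double *send_lo, double *send_hi,
                          int n, int halo) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < halo) { send_lo[i] = q[i]; send_hi[i] = q[n - halo + i]; }
}
__global__ void interior_update(double *q, int n, int halo) {
    int i = blockIdx.x * blockDim.x + threadIdx.x + halo;
    if (i < n - halo) q[i] += 0.0;          // real code: interior fluxes and residual
}
__global__ void boundary_update(double *q, const double *recv_lo,
                                const double *recv_hi, int n, int halo) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < halo) q[i] += 0.0 * (recv_lo[i] + recv_hi[i]);  // real code: boundary fluxes
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    cudaSetDevice(rank % 4);                // assumption: 4 GPUs per node

    const int n = 1 << 20, halo = 1024;     // cells per rank and halo width (illustrative)
    int lo = (rank - 1 + size) % size;      // periodic left/right neighbours
    int hi = (rank + 1) % size;

    double *d_q, *d_slo, *d_shi, *d_rlo, *d_rhi;
    cudaMalloc(&d_q,   n    * sizeof(double));
    cudaMalloc(&d_slo, halo * sizeof(double));
    cudaMalloc(&d_shi, halo * sizeof(double));
    cudaMalloc(&d_rlo, halo * sizeof(double));
    cudaMalloc(&d_rhi, halo * sizeof(double));
    cudaMemset(d_q, 0, n * sizeof(double));

    double *h_slo, *h_shi, *h_rlo, *h_rhi;  // pinned host buffers for async copies
    cudaMallocHost(&h_slo, halo * sizeof(double));
    cudaMallocHost(&h_shi, halo * sizeof(double));
    cudaMallocHost(&h_rlo, halo * sizeof(double));
    cudaMallocHost(&h_rhi, halo * sizeof(double));

    cudaStream_t s_halo, s_int;
    cudaStreamCreate(&s_halo);
    cudaStreamCreate(&s_int);

    // One stage: halo packing, transfers, and MPI on s_halo overlap the large
    // interior kernel launched on s_int.
    pack_halo<<<(halo + 255) / 256, 256, 0, s_halo>>>(d_q, d_slo, d_shi, n, halo);
    interior_update<<<(n + 255) / 256, 256, 0, s_int>>>(d_q, n, halo);

    cudaMemcpyAsync(h_slo, d_slo, halo * sizeof(double), cudaMemcpyDeviceToHost, s_halo);
    cudaMemcpyAsync(h_shi, d_shi, halo * sizeof(double), cudaMemcpyDeviceToHost, s_halo);
    cudaStreamSynchronize(s_halo);          // send buffers are ready on the host

    MPI_Request req[4];
    MPI_Irecv(h_rlo, halo, MPI_DOUBLE, lo, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(h_rhi, halo, MPI_DOUBLE, hi, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(h_slo, halo, MPI_DOUBLE, lo, 1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(h_shi, halo, MPI_DOUBLE, hi, 0, MPI_COMM_WORLD, &req[3]);
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);   // interior kernel still running on s_int

    cudaMemcpyAsync(d_rlo, h_rlo, halo * sizeof(double), cudaMemcpyHostToDevice, s_halo);
    cudaMemcpyAsync(d_rhi, h_rhi, halo * sizeof(double), cudaMemcpyHostToDevice, s_halo);
    boundary_update<<<(halo + 255) / 256, 256, 0, s_halo>>>(d_q, d_rlo, d_rhi, n, halo);
    cudaDeviceSynchronize();                // both streams finish before the next RK stage

    cudaStreamDestroy(s_halo);  cudaStreamDestroy(s_int);
    cudaFreeHost(h_slo); cudaFreeHost(h_shi); cudaFreeHost(h_rlo); cudaFreeHost(h_rhi);
    cudaFree(d_q); cudaFree(d_slo); cudaFree(d_shi); cudaFree(d_rlo); cudaFree(d_rhi);
    MPI_Finalize();
    return 0;
}

Pinned (page-locked) host buffers are assumed so that cudaMemcpyAsync can genuinely overlap with kernel execution; with a CUDA-aware MPI implementation, the intermediate host copies could be dropped and the device buffers passed to MPI directly.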
Pages: 15