Incompressible Fluid Simulation Parallelization with OpenMP, MPI and CUDA

Cited by: 0
Authors
Jiang, Xuan [1 ]
Lu, Laurence [2 ]
Song, Linyue [3 ]
Institutions
[1] Univ Calif Berkeley, Civil & Environm Engn Dept, Berkeley, CA 94720 USA
[2] Univ Calif Berkeley, Elect Engn & Comp Sci Dept, Berkeley, CA USA
[3] Univ Calif Berkeley, Dept Comp Sci, Berkeley, CA USA
Source
ADVANCES IN INFORMATION AND COMMUNICATION, FICC, VOL 2 | 2023 / Vol. 652
Keywords
OpenMP; MPI; CUDA; Fluid Simulation; Parallel Computation;
DOI
10.1007/978-3-031-28073-3_28
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
We base our initial serial implementation on the original code presented in Jos Stam's paper. OpenMP was the easiest parallelization to add: because the solver is grid-based and OpenMP uses shared memory, no mutexes or other data locks were needed, and the pragmas could be inserted without introducing data races. We note, however, that the Gauss-Seidel method, which solves the linear system using partially updated intermediate values, can propagate cascading errors because each cell relies on neighboring cells that have already been updated. This issue is avoided by looping over the cells in two passes, with each pass covering one half of a disjoint checkerboard pattern. In the CUDA implementation, the set_bnd function for enforcing boundary conditions has two main parts, handling the edges and the corners, respectively. The corners force a somewhat awkward design in which an additional kernel, launched with exactly one block and one thread, resolves them; this has almost no impact on performance, since the most time-consuming parts of our implementation are cudaMalloc and cudaMemcpy. The only synchronization primitive the CUDA code uses is __syncthreads(). We deliberately avoided atomic operations, which would be comparatively expensive, but __syncthreads() is needed at the end of diffuse, project, and advect because we reset the fluid boundaries after every diffusion and advection step. Without the two-pass method described for OpenMP, similar data races are also introduced here. Like the OpenMP version, the pure MPI implementation inherits many features of the serial implementation, but it additionally performs domain decomposition and the necessary communication. Synchronization occurs through these communication steps, and because the simulation is local there is no implicit global barrier, so much of the computation can proceed almost asynchronously.
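The two-pass checkerboard sweep mentioned in the abstract can be illustrated with a short sketch. The routine below is a minimal OpenMP version of a red-black Gauss-Seidel relaxation in the style of Stam's solver; the names (lin_solve_rb, IX, N, a, c) and the fixed iteration count are illustrative assumptions, not the paper's actual code.

    #include <omp.h>

    #define IX(i, j) ((i) + (N + 2) * (j))   /* 2D indexing in the style of Stam's solver */

    /* Red-black Gauss-Seidel sweep: each pass updates only one checkerboard
     * colour, so no thread reads a cell that another thread is updating in
     * the same pass (illustrative sketch, not the paper's code). */
    static void lin_solve_rb(int N, float *x, const float *x0, float a, float c)
    {
        for (int k = 0; k < 20; k++) {                 /* fixed number of relaxation sweeps */
            for (int colour = 0; colour < 2; colour++) {
                #pragma omp parallel for
                for (int j = 1; j <= N; j++) {
                    /* stagger the starting column so each pass hits one colour only */
                    for (int i = 1 + ((j + colour) % 2); i <= N; i += 2) {
                        x[IX(i, j)] = (x0[IX(i, j)] +
                                       a * (x[IX(i - 1, j)] + x[IX(i + 1, j)] +
                                            x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
                    }
                }
            }
        }
    }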
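The boundary treatment described above, a parallel kernel for the edges plus a single-block, single-thread kernel for the corners, might look roughly like the following CUDA sketch. Kernel names, launch parameters, and the mirroring rules are assumptions for illustration, not taken from the paper.

    #define IX(i, j) ((i) + (N + 2) * (j))

    /* Edge cells are handled by many threads in parallel. */
    __global__ void set_bnd_edges(int N, int b, float *x)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x + 1;
        if (i > N) return;
        /* mirror the interior value; negate it for the normal velocity component */
        x[IX(0,     i)] = (b == 1) ? -x[IX(1, i)] : x[IX(1, i)];
        x[IX(N + 1, i)] = (b == 1) ? -x[IX(N, i)] : x[IX(N, i)];
        x[IX(i,     0)] = (b == 2) ? -x[IX(i, 1)] : x[IX(i, 1)];
        x[IX(i, N + 1)] = (b == 2) ? -x[IX(i, N)] : x[IX(i, N)];
    }

    /* The four corners average their two edge neighbours; a single thread suffices. */
    __global__ void set_bnd_corners(int N, float *x)
    {
        x[IX(0,     0)]     = 0.5f * (x[IX(1, 0)]     + x[IX(0, 1)]);
        x[IX(0,     N + 1)] = 0.5f * (x[IX(1, N + 1)] + x[IX(0, N)]);
        x[IX(N + 1, 0)]     = 0.5f * (x[IX(N, 0)]     + x[IX(N + 1, 1)]);
        x[IX(N + 1, N + 1)] = 0.5f * (x[IX(N, N + 1)] + x[IX(N + 1, N)]);
    }

    /* usage: edges first, then the one-block, one-thread corner kernel,
     * since the corners read the updated edge values
     *   set_bnd_edges<<<(N + 255) / 256, 256>>>(N, b, d_x);
     *   set_bnd_corners<<<1, 1>>>(N, d_x);
     */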
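The MPI domain decomposition and communication can likewise be sketched as a ghost-cell (halo) exchange. The code below assumes a simple 1D row decomposition with one ghost row on each side; the paper's actual decomposition scheme and routine names may differ.

    #include <mpi.h>

    /* Each rank owns `local_rows` rows plus two ghost rows (row 0 and row
     * local_rows + 1) and refreshes the ghost rows from its neighbours
     * before each solver sweep (illustrative sketch). */
    static void exchange_ghost_rows(float *x, int local_rows, int row_len,
                                    int rank, int nprocs, MPI_Comm comm)
    {
        int up   = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int down = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        /* send first owned row up, receive the lower ghost row from below */
        MPI_Sendrecv(&x[1 * row_len],                row_len, MPI_FLOAT, up,   0,
                     &x[(local_rows + 1) * row_len], row_len, MPI_FLOAT, down, 0,
                     comm, MPI_STATUS_IGNORE);
        /* send last owned row down, receive the upper ghost row from above */
        MPI_Sendrecv(&x[local_rows * row_len],       row_len, MPI_FLOAT, down, 1,
                     &x[0],                          row_len, MPI_FLOAT, up,   1,
                     comm, MPI_STATUS_IGNORE);
    }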
Pages: 385-395
Page count: 11