Effects of mesh loop modes on performance of unstructured finite volume GPU simulations

Cited by: 4
Authors
Weng, Yue [1 ]
Zhang, Xi [1 ]
Guo, Xiaohu [2 ]
Zhang, Xianwei [1 ]
Lu, Yutong [1 ]
Liu, Yang [3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Hartree Ctr, STFC Daresbury Lab, Warrington, Cheshire, England
[3] China Aerodynam Res & Dev Ctr, Mianyang, Sichuan, Peoples R China
Keywords
GPU; CFD; Finite volume; Unstructured mesh; Mesh loop modes; Data locality; Data dependence
DOI
10.1186/s42774-021-00073-y
Chinese Library Classification (CLC) number
TH [Machinery and Instrument Industry]
Discipline classification code
0802
Abstract
In the unstructured finite volume method, loops over different mesh components, such as cells, faces, and nodes, are widely used to traverse data. The mesh loop produces direct or indirect data accesses that significantly affect data locality. Looping over the mesh can also cause many threads to access the same data, which leads to data dependence. Both data locality and data dependence play an important role in the performance of GPU simulations. To optimize a GPU-accelerated unstructured finite volume Computational Fluid Dynamics (CFD) program, the performance of its hot spots under different loops over cells, faces, and nodes is evaluated on Nvidia Tesla V100 and K80 GPUs. Numerical tests at different mesh scales show that the mesh loop modes affect data locality and data dependence differently. Specifically, the face loop yields the best data locality whenever face data are accessed in a kernel. The cell loop incurs the smallest overhead from non-coalesced data access when both cell and node data are used in a computation without face data, and it gives the best performance when only indirect access to cell data occurs in a kernel. Atomic operations reduce kernel performance considerably on the K80, whereas the effect is not obvious on the V100. With a suitable mesh loop mode chosen for every kernel, the overall performance of the GPU simulations can be increased by 15%-20%. Finally, the program on a single V100 GPU achieves a maximum speedup of 21.7 and an average speedup of 14.1 compared with 28 MPI tasks on two Intel Xeon Gold 6132 CPUs.
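The abstract contrasts loop modes mainly through data locality (coalesced versus indirect access) and data dependence (concurrent updates to shared data). As a minimal illustrative sketch only, and not the authors' code, the CUDA example below writes a hypothetical flux-to-residual accumulation in two of these loop modes: a face loop whose shared cell updates are serialized with atomics, and a cell loop that gathers over a per-cell face list. All names (faceLoopResidual, owner, neighbour, the CSR-style cell-face adjacency) and the toy mesh are assumptions for illustration.

```cuda
// Illustrative sketch only (assumed names, not the authors' code): the same
// residual accumulation written in two mesh loop modes on an unstructured mesh.
// Compile with, e.g., nvcc -arch=sm_70 loop_modes.cu (double atomicAdd needs cc >= 6.0).
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

// Face loop: one thread per face. Face data is read contiguously (good data
// locality), but both adjacent cells receive updates, so the write dependence
// between threads that share a cell is resolved with atomics.
__global__ void faceLoopResidual(int nFaces, const int* owner, const int* neighbour,
                                 const double* faceFlux, double* residual) {
    int f = blockIdx.x * blockDim.x + threadIdx.x;
    if (f >= nFaces) return;
    double flux = faceFlux[f];                  // coalesced read of face data
    atomicAdd(&residual[owner[f]], flux);       // indirect, contended writes
    atomicAdd(&residual[neighbour[f]], -flux);
}

// Cell loop: one thread per cell. Each cell gathers the fluxes of its own faces
// through a CSR-style adjacency, so writes stay private and no atomics are
// needed, at the cost of indirect, non-coalesced reads of face data.
__global__ void cellLoopResidual(int nCells, const int* cellFaceStart, const int* cellFaces,
                                 const int* faceSign, const double* faceFlux, double* residual) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= nCells) return;
    double sum = 0.0;
    for (int i = cellFaceStart[c]; i < cellFaceStart[c + 1]; ++i)
        sum += faceSign[i] * faceFlux[cellFaces[i]];  // indirect reads
    residual[c] = sum;                                // private write
}

int main() {
    // Tiny 2-cell, 1-face mesh, just enough to exercise both kernels.
    int *owner, *neighbour, *cfStart, *cfFaces, *cfSign;
    double *flux, *res;
    cudaMallocManaged(&owner, sizeof(int));       owner[0] = 0;
    cudaMallocManaged(&neighbour, sizeof(int));   neighbour[0] = 1;
    cudaMallocManaged(&flux, sizeof(double));     flux[0] = 1.5;
    cudaMallocManaged(&res, 2 * sizeof(double));  res[0] = res[1] = 0.0;
    int start[] = {0, 1, 2}, faces[] = {0, 0}, sign[] = {1, -1};
    cudaMallocManaged(&cfStart, sizeof(start));   memcpy(cfStart, start, sizeof(start));
    cudaMallocManaged(&cfFaces, sizeof(faces));   memcpy(cfFaces, faces, sizeof(faces));
    cudaMallocManaged(&cfSign, sizeof(sign));     memcpy(cfSign, sign, sizeof(sign));

    faceLoopResidual<<<1, 32>>>(1, owner, neighbour, flux, res);
    cudaDeviceSynchronize();
    printf("face loop: %f %f\n", res[0], res[1]);

    res[0] = res[1] = 0.0;
    cellLoopResidual<<<1, 32>>>(2, cfStart, cfFaces, cfSign, flux, res);
    cudaDeviceSynchronize();
    printf("cell loop: %f %f\n", res[0], res[1]);
    return 0;
}
```

The trade-off sketched here mirrors the abstract's findings: the face loop reads face data contiguously but pays for atomic writes (costly on the K80), while the cell loop avoids atomics at the price of indirect, non-coalesced reads.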
Pages: 23