MHD code using multi graphical processing units: SMAUG

Cited: 1
Authors
Gyenge, N. [1 ,2 ,4 ]
Griffiths, M. K. [1 ,3 ]
Erdelyi, R. [1 ,4 ]
Affiliations
[1] Univ Sheffield, Sch Math & Stat, SP2RC, Solar Phys & Space Plasmas Res Ctr, Hounsfield Rd, Sheffield S3 7RH, S Yorkshire, England
[2] Hungarian Acad Sci, Res Ctr Astron & Earth Sci, Debrecen Heliophys Observ DHO, Konkoly Observ, POB 30, H-4010 Debrecen, Hungary
[3] Univ Sheffield, Corp Informat & Comp Serv, 10-12 Brunswick St, Sheffield S10 2FN, S Yorkshire, England
[4] Eotvos Lorand Univ, Dept Astron, H-1518 Budapest, Hungary
Funding
UK Science and Technology Facilities Council;
Keywords
Numerical simulations; Magnetohydrodynamics; Graphical processing units; Sheffield advanced code; RADIATION MAGNETOHYDRODYNAMICS CODE; 2 SPACE DIMENSIONS; GRAVITATIONALLY-STRATIFIED MEDIA; ASTROPHYSICAL FLOWS; HYDRODYNAMIC ALGORITHMS; SIMULATIONS; TESTS; ZEUS-2D; SYSTEMS;
DOI
10.1016/j.asr.2017.10.027
CLC Number
V [Aviation, Aerospace];
Discipline Classification Code
08 ; 0825 ;
Abstract
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems on multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 x 1000, 2044 x 2044, 4000 x 4000 and 8000 x 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems. (C) 2017 COSPAR. Published by Elsevier Ltd. All rights reserved.
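The abstract describes distributing a 2D simulation grid among a square array of GPUs, with communication overhead between neighbouring subdomains governing the observed speed-ups and slow-downs. The following is a minimal illustrative sketch (not taken from the SMAUG+ source) of how such a domain decomposition is typically set up: each GPU holds an interior block plus a halo of ghost cells exchanged with its face neighbours every time step. The function names and the halo width of 2 are assumptions for illustration only.

```python
# Hypothetical sketch of 2D block domain decomposition for a multi-GPU
# MHD run, e.g. the 1000 x 1000 Orszag-Tang benchmark on 4 (= 2 x 2) GPUs.

def decompose(n_cells, n_gpus_per_side, halo=2):
    """Return (interior size, padded size incl. halo) of one GPU's block."""
    if n_cells % n_gpus_per_side != 0:
        raise ValueError("grid must divide evenly among GPUs")
    interior = n_cells // n_gpus_per_side
    return interior, interior + 2 * halo

def neighbours(ix, iy, p):
    """Ranks of the face neighbours of block (ix, iy) on a non-periodic
    p x p GPU layout, row-major rank numbering."""
    nb = {}
    if ix > 0:
        nb["left"] = (ix - 1) * p + iy
    if ix < p - 1:
        nb["right"] = (ix + 1) * p + iy
    if iy > 0:
        nb["down"] = ix * p + (iy - 1)
    if iy < p - 1:
        nb["up"] = ix * p + (iy + 1)
    return nb

if __name__ == "__main__":
    # 1000 x 1000 cells on a 2 x 2 GPU layout:
    interior, padded = decompose(1000, 2)
    print(interior, padded)     # 500 504 -- each GPU stores 504 x 504 cells
    print(neighbours(0, 0, 2))  # a corner block exchanges with 2 neighbours
```

Note how the halo cost is fixed per block while the interior shrinks as GPUs are added, which is the granularity effect the abstract reports: for small grids on many GPUs, halo exchange can dominate and produce slow-downs rather than speed-ups.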
Pages: 683-690
Page count: 8
Related Papers
50 items in total
  • [41] Fibre segment interferometry using code-division multiplexed optical signal processing for strain sensing applications
    Kissinger, Thomas
    Charrett, Thomas O. H.
    Tatam, Ralph P.
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2013, 24 (09)
  • [42] Bootstrap current control studies in the Wendelstein 7-X stellarator using the free-plasma-boundary version of the SIESTA MHD equilibrium code
    Peraza-Rodriguez, H.
    Reynolds-Barredo, J. M.
    Sanchez, R.
    Tribaldos, V.
    Geiger, J.
    PLASMA PHYSICS AND CONTROLLED FUSION, 2018, 60 (02)
  • [43] Multinode Multi-GPU Two-Electron Integrals: Code Generation Using the Regent Language
    Johnson, K. Grace
    Mirchandaney, Seema
    Hoag, Ellis
    Heirich, Alan
    Aiken, Alex
    Martinez, Todd J.
    JOURNAL OF CHEMICAL THEORY AND COMPUTATION, 2022, 18 (11) : 6522 - 6536
  • [44] Multi-step processing of single cells using semi-permeable capsules
    Leonaviciene, Greta
    Leonavicius, Karolis
    Meskys, Rolandas
    Mazutis, Linas
    LAB ON A CHIP, 2020, 20 (21) : 4052 - 4062
  • [45] Implicit block data-parallel relaxation scheme of Navier-Stokes equations using graphics processing units
    Zhou, Bohao
    Huang, Xudong
    Zhang, Ke
    Bi, Dianfang
    Zhou, Ming
    PHYSICS OF FLUIDS, 2022, 34 (11)
  • [46] A scalable algorithm for many-body dissipative particle dynamics using multiple general purpose graphic processing units
    Di Giusto, Davide
    Castagna, Jony
    COMPUTER PHYSICS COMMUNICATIONS, 2022, 280
  • [47] A multi-scale and multi-physics approach to main steam line break accidents using coupled MASTER/CUPID/MARS code
    Park, Ik Kyu
    Lee, Jae Ryong
    Choi, Yong Hee
    Kang, Doo Hyuk
    ANNALS OF NUCLEAR ENERGY, 2020, 135
  • [48] One-dimensional numerical investigation on the formation of Z-pinch dynamic hohlraum using the code MULTI
    Wu Fu-Yuan
    Chu Yan-Yun
    Ye Fan
    Li Zheng-Hong
    Yang Jian-Lun
    Ramis, Rafael
    Wang Zhen
    Qi Jian-Min
    Zhou Lin
    Liang Chuan
    ACTA PHYSICA SINICA, 2017, 66 (21)
  • [49] Time-Domain Power Quality State Estimation Based on Kalman Filter Using Parallel Computing on Graphics Processing Units
    Cisneros-Magana, Rafael
    Medina, Aurelio
    Dinavahi, Venkata
    Ramos-Paz, Antonio
    IEEE ACCESS, 2018, 6 : 21152 - 21163
  • [50] Verification of the multi-group diffusion code AZNHEX using the OECD/NEA UAM Sodium Fast Reactor Benchmark
    del-Valle-Gallegos, Edmundo
    Lopez-Solis, Roberto
    Arriaga-Ramirez, Lucero
    Gomez-Torres, Armando
    Puente-Espel, Federico
    ANNALS OF NUCLEAR ENERGY, 2018, 114 : 592 - 602