MHD code using multi graphical processing units: SMAUG+

Cited by: 1
Authors
Gyenge, N. [1 ,2 ,4 ]
Griffiths, M. K. [1 ,3 ]
Erdelyi, R. [1 ,4 ]
Affiliations
[1] Univ Sheffield, Sch Math & Stat, SP2RC, Solar Phys & Space Plasmas Res Ctr, Hounsfield Rd, Sheffield S3 7RH, S Yorkshire, England
[2] Hungarian Acad Sci, Res Ctr Astron & Earth Sci, Debrecen Heliophys Observ DHO, Konkoly Observ, POB 30, H-4010 Debrecen, Hungary
[3] Univ Sheffield, Corp Informat & Comp Serv, 10-12 Brunswick St, Sheffield S10 2FN, S Yorkshire, England
[4] Eotvos Lorand Univ, Dept Astron, H-1518 Budapest, Hungary
Funding
UK Science and Technology Facilities Council (STFC);
Keywords
Numerical simulations; Magnetohydrodynamics; Graphical processing units; Sheffield advanced code; RADIATION MAGNETOHYDRODYNAMICS CODE; 2 SPACE DIMENSIONS; GRAVITATIONALLY-STRATIFIED MEDIA; ASTROPHYSICAL FLOWS; HYDRODYNAMIC ALGORITHMS; SIMULATIONS; TESTS; ZEUS-2D; SYSTEMS;
DOI
10.1016/j.asr.2017.10.027
Chinese Library Classification
V [Aeronautics, Astronautics];
Subject Classification Code
08; 0825;
Abstract
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or finer computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs, and different simulation box resolutions are applied: 1000 x 1000, 2044 x 2044, 4000 x 4000 and 8000 x 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid, large 2D MHD problems. (C) 2017 COSPAR. Published by Elsevier Ltd. All rights reserved.
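The abstract describes distributing the initial configuration of a simulation across many GPUs. The standard pattern behind such multi-GPU MHD codes is 2D domain decomposition with ghost (halo) layers that are refreshed from neighbouring subdomains after each time step. The sketch below is a minimal, illustrative NumPy mock-up of that decomposition; the function names and the one-cell halo convention are our own assumptions for illustration, not SMAUG+'s actual API, and real codes perform the halo refresh with GPU-to-GPU message passing rather than array slicing.

```python
import numpy as np

def decompose(n, p):
    """Split n grid cells into p near-equal contiguous (start, stop) chunks."""
    base, rem = divmod(n, p)
    bounds, start = [], 0
    for r in range(p):
        stop = start + base + (1 if r < rem else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

def split_with_halos(field, px, py, halo=1):
    """Carve a global 2D field into px*py overlapping subdomains.

    The `halo`-wide overlap plays the role of the ghost layers a
    multi-GPU code must re-exchange between neighbouring devices
    after every update; halos are clamped at physical boundaries.
    Returns a dict mapping (i, j) block indices to array views.
    """
    nx, ny = field.shape
    subs = {}
    for i, (x0, x1) in enumerate(decompose(nx, px)):
        for j, (y0, y1) in enumerate(decompose(ny, py)):
            subs[(i, j)] = field[max(x0 - halo, 0):min(x1 + halo, nx),
                                 max(y0 - halo, 0):min(y1 + halo, ny)]
    return subs
```

For example, a 1000 x 1000 grid split 2 x 2 gives each of 4 workers a 500 x 500 interior plus a one-cell halo along interior edges; the per-device memory footprint shrinks roughly with the device count, which is the memory-limit argument the abstract makes, while the halo traffic is the communication overhead it cites.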
Pages: 683-690
Page count: 8