A massive MPI parallel framework of smoothed particle hydrodynamics with optimized memory management for extreme mechanics problems

Cited by: 9
Authors
Liu, Jiahao [1 ,2 ]
Yang, Xiufeng [3 ]
Zhang, Zhilang [4 ]
Liu, Moubin [1 ,2 ]
Affiliations
[1] Peking Univ, Coll Engn, Beijing 100871, Peoples R China
[2] Laoshan Lab, Joint Lab Marine Hydrodynam & Ocean Engn, Qingdao 266237, Peoples R China
[3] Beijing Inst Technol, Sch Aerosp Engn, Beijing 100081, Peoples R China
[4] Swiss Fed Inst Technol, Dept Mech & Proc Engn, CH-8092 Zurich, Switzerland
Funding
National Natural Science Foundation of China;
Keywords
Smoothed particle hydrodynamics; Message passing interface; Massive high performance computing; Memory management; Extreme mechanics problems; MODELING INCOMPRESSIBLE FLOWS; HIGH-VELOCITY IMPACT; SHAPED-CHARGE; HYPERVELOCITY IMPACT; SIMULATION; SPH; PENETRATION; CODE; CONSERVATION; DAMAGE;
DOI
10.1016/j.cpc.2023.108970
CLC Number
TP39 [Computer Applications];
Discipline Code
081203; 0835;
Abstract
The dynamic failure of structures under extreme loadings is common across many fields of engineering and science. The smoothed particle hydrodynamics (SPH) method offers inherent advantages in handling the complex interfaces and large material deformations that characterize extreme mechanics problems. However, SPH simulations of 3D engineering applications are time-consuming. To address this issue, we introduce MPI (Message Passing Interface) parallelization into our SPH scheme to reduce computational time, together with several optimizations that enable massive-scale SPH computation. In particular, an optimized memory management strategy is developed to control the memory footprint. Several validation examples are tested and analyzed with the present MPI-based massively parallel SPH method. Comparison of the present numerical results with reference data shows that the dynamic failure of complex structures subjected to extreme loadings, such as explosive and impact loadings, is well captured. Up to 2.04 billion particles are used in the present simulations. Scaling tests show that the massively parallel SPH program achieves a maximum parallel efficiency of 97% on 10,020 CPU cores.
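The abstract does not give implementation details, but the core of any MPI-parallel SPH code is a spatial domain decomposition with a ghost-particle (halo) exchange across subdomain boundaries. The sketch below shows one minimal way such an exchange could look, assuming a 1-D slab decomposition; all identifiers (Particle, exchange_halos, the slab bounds) are hypothetical illustrations, not taken from the paper.

```cpp
// Minimal sketch of an MPI halo exchange for a 1-D slab-decomposed SPH
// domain. All data structures are illustrative; the paper's actual
// communication scheme is not described in the abstract.
#include <mpi.h>
#include <vector>
#include <cstdio>

struct Particle {      // plain-old-data so it can be shipped as raw bytes
    double x[3];       // position
    double v[3];       // velocity
    double rho;        // density
    double m;          // mass
};

// Send particles within one smoothing length h of a subdomain face to the
// neighboring rank; receive that rank's boundary particles as ghosts.
static std::vector<Particle> exchange_halos(const std::vector<Particle>& local,
                                            double x_lo, double x_hi, double h,
                                            int rank, int nranks, MPI_Comm comm)
{
    std::vector<Particle> to_left, to_right;
    for (const Particle& p : local) {
        if (p.x[0] < x_lo + h) to_left.push_back(p);
        if (p.x[0] > x_hi - h) to_right.push_back(p);
    }

    const int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

    std::vector<Particle> ghosts;
    auto swap_with = [&](int nbr, const std::vector<Particle>& send) {
        int nsend = static_cast<int>(send.size()), nrecv = 0;
        // First agree on counts, then move the particle payload as bytes.
        MPI_Sendrecv(&nsend, 1, MPI_INT, nbr, 0,
                     &nrecv, 1, MPI_INT, nbr, 0, comm, MPI_STATUS_IGNORE);
        std::vector<Particle> buf(nrecv);
        MPI_Sendrecv(send.data(), nsend * (int)sizeof(Particle), MPI_BYTE, nbr, 1,
                     buf.data(),  nrecv * (int)sizeof(Particle), MPI_BYTE, nbr, 1,
                     comm, MPI_STATUS_IGNORE);
        ghosts.insert(ghosts.end(), buf.begin(), buf.end());
    };
    swap_with(left,  to_left);
    swap_with(right, to_right);
    return ghosts;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each rank owns the slab [rank, rank+1) with smoothing length h = 0.1;
    // place one particle near the left face so it becomes a neighbor's ghost.
    double h = 0.1, x_lo = rank, x_hi = rank + 1.0;
    std::vector<Particle> local = { {{x_lo + 0.05, 0, 0}, {0, 0, 0}, 1000.0, 1.0} };

    std::vector<Particle> ghosts =
        exchange_halos(local, x_lo, x_hi, h, rank, nranks, MPI_COMM_WORLD);
    std::printf("rank %d received %zu ghost particles\n", rank, ghosts.size());

    MPI_Finalize();
    return 0;
}
```

Because each SPH particle interacts only with neighbors inside the smoothing length h, only a layer of width h next to each subdomain face needs to be communicated; this locality is what makes the near-linear scaling to thousands of cores reported in the abstract plausible.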
Pages: 20
Related Papers
Total: 5
  • [1] A novel MPI-based parallel smoothed particle hydrodynamics framework with dynamic load balancing for free surface flow
    Zhu, Guixun
    Hughes, Jason
    Zheng, Siming
    Greaves, Deborah
    COMPUTER PHYSICS COMMUNICATIONS, 2023, 284
  • [2] Smoothed particle hydrodynamics method for free surface flow based on MPI parallel computing
    Long, Sifan
    Wong, Kelvin K. L.
    Fan, Xiaokang
    Guo, Xiaowei
    Yang, Canqun
    FRONTIERS IN PHYSICS, 2023, 11
  • [3] FleCSPH: a Parallel and Distributed Smoothed Particle Hydrodynamics Framework Based on FleCSI
    Loiseau, Julien
    Lim, Hyun
    Bergen, Ben K.
    Moss, Nicholas D.
    Alin, Francois
    PROCEEDINGS 2018 INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING & SIMULATION (HPCS), 2018: 484-491
  • [4] Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing
    Nishiura, Daisuke
    Furuichi, Mikito
    Sakaguchi, Hide
    COMPUTER PHYSICS COMMUNICATIONS, 2015, 194: 18-32
  • [5] juSPH: A Julia-based open-source package of parallel Smoothed Particle Hydrodynamics (SPH) for dam break problems
    Luo, Mimi
    Qin, Jiayu
    Mei, Gang
    SOFTWAREX, 2022, 19