Estimating the overhead and coupling of scientific computing clusters

Cited by: 1
Authors
Vivas, Aurelio [1 ]
Castro, Harold [1 ]
Affiliations
[1] Univ Los Andes, Dept Comp & Syst Engn, COMIT Res Grp, Ave 1 18A-12, Bogota 111711, Colombia
Source
SIMULATION: TRANSACTIONS OF THE SOCIETY FOR MODELING AND SIMULATION INTERNATIONAL | 2023, Vol. 99, No. 3
Keywords
High-performance computing; cluster computing; performance evaluation; parallel overhead; cluster overhead; coupling; PERFORMANCE EVALUATION; CLOUD;
DOI
10.1177/00375497211064198
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Discipline classification codes
081203; 0835
Abstract
Since simulation became the third pillar of scientific research, several forms of computers have become available to drive computer-aided simulations, and clusters are now the most popular type of computer supporting these tasks. Cluster configurations such as supercomputers, clusters of workstations (COW), clusters of desktops (COD), and clusters of virtual machines (COV) have been considered in the literature to serve a variety of scientific applications. However, applications categorized as high-performance computing (HPC) are conventionally assumed to be addressable only by supercomputers. To examine this assumption, we introduce the notions of cluster overhead and cluster coupling to assess the capacity of non-HPC systems to handle HPC applications. We also compare cluster overhead with an existing measure of overhead in computing systems, the total parallel overhead, to validate the soundness of our methodology. The capacity evaluation considers the seven dwarfs of scientific computing, well-known building blocks used in the development of HPC applications. Evaluating these building blocks provides insight into the strengths and weaknesses of non-HPC systems when dealing with future HPC applications built from one or a combination of these algorithmic building blocks.
Pages: 245 - 261
Page count: 17
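
The abstract contrasts the proposed cluster overhead with the total parallel overhead, an established metric from the parallel computing literature. As a minimal sketch, assuming the textbook definition (e.g., Grama et al., Introduction to Parallel Computing) rather than the paper's own formulation, the total parallel overhead on p processors is T_o = p * T_p - T_s, where T_s is the serial runtime and T_p the parallel runtime; the helper names below are illustrative only.

    # Sketch of the *textbook* total parallel overhead, the baseline metric
    # the paper compares against (the paper's own "cluster overhead" metric
    # is defined in the full text, not reproduced here):
    #   T_o = p * T_p - T_s
    # where T_s is the serial runtime and T_p the runtime on p processors.

    def total_parallel_overhead(t_serial: float, t_parallel: float, p: int) -> float:
        """Total work done by p processors beyond the serial work."""
        return p * t_parallel - t_serial

    def speedup(t_serial: float, t_parallel: float) -> float:
        return t_serial / t_parallel

    def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
        # Equivalently: 1 / (1 + T_o / T_s)
        return speedup(t_serial, t_parallel) / p

    if __name__ == "__main__":
        # Hypothetical timings: 100 s serial, 30 s on 4 processors.
        t_s, t_p, p = 100.0, 30.0, 4
        print(f"overhead   T_o = {total_parallel_overhead(t_s, t_p, p):.1f} s")  # 20.0 s
        print(f"speedup    S   = {speedup(t_s, t_p):.2f}")                       # 3.33
        print(f"efficiency E   = {efficiency(t_s, t_p, p):.2f}")                 # 0.83

For reference, the seven dwarfs named in the abstract are Colella's widely cited kernel classes: dense linear algebra, sparse linear algebra, spectral methods, N-body methods, structured grids, unstructured grids, and Monte Carlo.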