A Memory Efficient Parallel All-Pairs Computation Framework: Computation - Communication Overlap

Cited by: 1
Authors
Yeleswarapu, Venkata Kasi Viswanath [1 ]
Somani, Arun K. [1 ]
Affiliations
[1] Iowa State Univ, Dept Elect & Comp Engn, Ames, IA 50010 USA
Source
PARALLEL PROCESSING AND APPLIED MATHEMATICS (PPAM 2017), PT I | 2018 / Vol. 10777
Funding
US National Science Foundation
Keywords
Communication-computation overlap; High performance computing; All-Pairs problems; Parallel computing; MPI
DOI
10.1007/978-3-319-78024-5_39
Chinese Library Classification
TP31 [Computer software]
Discipline Classification Codes
081202; 0835
Abstract
All-Pairs problems require each data element in a set of N elements to be paired with every other element for a specific computation on the two. Our framework addresses the recurring problems of scalability, distributing an equal workload to all nodes, and reducing the memory footprint: it lowers the per-node memory requirement of All-Pairs problems from N/√P to 3N/P. A bioinformatics application is implemented to demonstrate the framework's scalability (up to 512 cores for the data set we experimented with), redundancy management, and speed-up performance.
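The 3N/P figure in the abstract is consistent with a ring-style block rotation, in which each node keeps its own N/P-element block plus one visiting block and one in-flight buffer, and the next receive can overlap with the current pair computation. A minimal single-process sketch of such a schedule, assuming a block-cyclic ring and using frozenset-based deduplication in place of the paper's redundancy management (function and variable names are illustrative, not the authors' code):

```python
from itertools import combinations

def all_pairs_ring(data, P):
    """Simulate a ring-based all-pairs schedule on P workers.

    Each worker keeps its own block of ~N/P elements plus one visiting
    block (and, in a real MPI run, one in-flight send buffer): roughly
    3N/P elements resident, versus the N/sqrt(P) per node that a 2D
    block-checkerboard layout needs. Returns the set of unordered pairs
    covered, so coverage can be verified.
    """
    N = len(data)
    bounds = [r * N // P for r in range(P + 1)]
    blocks = [data[bounds[r]:bounds[r + 1]] for r in range(P)]

    pairs = set()
    # Pairs inside a worker's own block need no communication.
    for blk in blocks:
        for a, b in combinations(blk, 2):
            pairs.add(frozenset((a, b)))

    # Rotate blocks around the ring; at each step every worker pairs its
    # resident block with the visiting block. In an MPI implementation the
    # receive of the next block would overlap with this computation.
    visiting = blocks[:]
    for _ in range(P - 1):
        visiting = visiting[1:] + visiting[:1]  # pass blocks to the neighbour
        for r in range(P):
            for a in blocks[r]:
                for b in visiting[r]:
                    pairs.add(frozenset((a, b)))  # set drops the mirrored visit
    return pairs

# All C(10, 2) = 45 unordered pairs of 10 elements are covered on 4 workers.
covered = all_pairs_ring(list(range(10)), 4)
assert covered == {frozenset(p) for p in combinations(range(10), 2)}
```

A full P-1-step rotation computes each cross-block pair twice (once at each endpoint); the paper's redundancy management would stop after about half the steps instead, which the set here only emulates.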
Pages: 443-458 (16 pages)