Scalable parallel matrix multiplication on distributed memory parallel computers

Citations: 8
Authors
Li, KQ [1 ]
Affiliation
[1] SUNY New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Funding
National Aeronautics and Space Administration (NASA);
Keywords
cost optimality; distributed memory parallel computer; linear array with reconfigurable pipelined bus system; matrix multiplication; module parallel computer; optical model of computation; scalability; speedup;
DOI
10.1006/jpdc.2001.1768
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Consider any known sequential algorithm for matrix multiplication over an arbitrary ring with time complexity O(N^α), where 2 < α ≤ 3. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in O(log N) time by using N^α/log N processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC can be made fully scalable; that is, for all 1 ≤ p ≤ N^α/log N, multiplying two N × N matrices can be performed by a DMPC with p processors in O(N^α/p) time, i.e., linear speedup and cost optimality can be achieved in the range [1..N^α/log N]. This unifies all known algorithms for matrix multiplication on DMPC, standard or nonstandard, sequential or parallel. Extensions of our methods and results to other parallel systems are also presented. For instance, for all 1 ≤ p ≤ N^α/log N, multiplying two N × N matrices can be performed by p processors connected by a hypercubic network in O(N^α/p + (N²/p^(2/α))(log p)^(2(α-1)/α)) time, which implies that if p = O(N^α/(log N)^(2(α-1)/(α-2))), linear speedup can be achieved. Such a parallelization is highly scalable. The above claims represent significant progress in scalable parallel matrix multiplication (as well as in solving many other important problems) on distributed memory systems, both theoretically and practically. (C) 2001 Academic Press.
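The abstract's linear-speedup claim rests on dividing the total O(N^α) work evenly among p processors so that each performs O(N^α/p) work. The following Python sketch illustrates that partitioning idea for the standard (α = 3) algorithm by assigning each of p simulated processors a contiguous block of output rows; it is a simplified illustration only, not the paper's DMPC algorithm, which additionally relies on communication rounds (one-to-one send/receive steps) that are not modeled here. The function names `matmul_block` and `parallel_matmul` are hypothetical.

```python
def matmul_block(A, B, rows):
    """One processor's share: compute the given rows of C = A * B.

    For a block of N/p rows, this is O(N^3 / p) work under the
    standard (alpha = 3) algorithm.
    """
    n = len(B)        # inner dimension
    m = len(B[0])     # number of columns of B (and of C)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(m)]
            for i in rows]

def parallel_matmul(A, B, p):
    """Simulate p processors, each handling a contiguous row block.

    Total work is O(N^3) regardless of p, so the computation is
    cost optimal; with p real processors the blocks would run
    concurrently, giving O(N^3 / p) time (linear speedup).
    """
    N = len(A)
    C = []
    for proc in range(p):
        lo = proc * N // p          # first row of this processor's block
        hi = (proc + 1) * N // p    # one past the last row
        C.extend(matmul_block(A, B, range(lo, hi)))
    return C
```

In a real DMPC implementation the row blocks of A (and copies of B, or blocks thereof) would be distributed across local memories, and the per-processor results gathered at the end; the sketch keeps everything in one address space to isolate the work-partitioning argument.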
Pages: 1709 - 1731
Page count: 23
Related Papers
50 in total
  • [31] Applying Architectural Patterns for Parallel Programming: Solving a Matrix Multiplication
    Ortega-Arjona, Jorge L.
    PROCEEDINGS OF THE EUROPEAN CONFERENCE ON PATTERN LANGUAGES OF PROGRAMS 2021, EUROPLOP 2021, 2021,
  • [32] HPMaX: heterogeneous parallel matrix multiplication using CPUs and GPUs
    Kang, Homin
    Kwon, Hyuck Chan
    Kim, Duksu
    COMPUTING, 2020, 102 (12) : 2607 - 2631
  • [33] PARALLEL COMPUTING OF MATRIX MULTIPLICATION IN OPEN MP SUPPORTED CODEBLOCKS
    Singh, Hari
    Chander, Dinesh
    Bhatt, Ravindara
    ADVANCES AND APPLICATIONS IN MATHEMATICAL SCIENCES, 2019, 18 (08): : 775 - 787
  • [34] SOME COMBINATORIAL ASPECTS OF PARALLEL ALGORITHM DESIGN FOR MATRIX MULTIPLICATION
    TSAY, JC
    SY, Y
    IEEE TRANSACTIONS ON COMPUTERS, 1992, 41 (03) : 355 - 361
  • [35] Communication lower bounds for distributed-memory matrix multiplication
    Irony, D
    Toledo, S
    Tiskin, A
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2004, 64 (09) : 1017 - 1026
  • [36] High-performance radix-2, 3 and 5 parallel 1-D complex FFT algorithms for distributed-memory parallel computers
    Takahashi, D
    Kanada, Y
    JOURNAL OF SUPERCOMPUTING, 2000, 15 (02) : 207 - 228
  • [37] Performance Model for Parallel Matrix Multiplication with Dryad: Dataflow Graph Runtime
    Li, Hui
    Fox, Geoffrey
    Qiu, Judy
    SECOND INTERNATIONAL CONFERENCE ON CLOUD AND GREEN COMPUTING / SECOND INTERNATIONAL CONFERENCE ON SOCIAL COMPUTING AND ITS APPLICATIONS (CGC/SCA 2012), 2012, : 675 - 683
  • [39] A parallel algorithm for matrix multiplication of compressed Z order data structures
    Scott, K
    PROCEEDINGS OF THE ISCA 20TH INTERNATIONAL CONFERENCE ON COMPUTERS AND THEIR APPLICATIONS, 2005, : 453 - 458
  • [40] Combining building blocks for parallel multi-level matrix multiplication
    Hunold, S.
    Rauber, T.
    Ruenger, G.
    PARALLEL COMPUTING, 2008, 34 (6-8) : 411 - 426