Scalable parallel matrix multiplication on distributed memory parallel computers

Cited by: 8
Authors
Li, KQ [1]
Affiliation
[1] SUNY New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Funding
National Aeronautics and Space Administration (NASA), USA;
Keywords
cost optimality; distributed memory parallel computer; linear array with reconfigurable pipelined bus system; matrix multiplication; module parallel computer; optical model of computation; scalability; speedup;
DOI
10.1006/jpdc.2001.1768
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline classification code
081202;
Abstract
Consider any known sequential algorithm for matrix multiplication over an arbitrary ring with time complexity O(N^α), where 2 < α ≤ 3. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in O(log N) time by using N^α/log N processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC can be made fully scalable, that is, for all 1 ≤ p ≤ N^α/log N, multiplying two N × N matrices can be performed by a DMPC with p processors in O(N^α/p) time, i.e., linear speedup and cost optimality can be achieved in the range [1..N^α/log N]. This unifies all known algorithms for matrix multiplication on DMPC, standard or nonstandard, sequential or parallel. Extensions of our methods and results to other parallel systems are also presented. For instance, for all 1 ≤ p ≤ N^α/log N, multiplying two N × N matrices can be performed by p processors connected by a hypercubic network in O(N^α/p + (N^2/p^(2/α)) (log p)^(2(α-1)/α)) time, which implies that if p = O(N^α/(log N)^(2(α-1)/(α-2))), linear speedup can be achieved. Such a parallelization is highly scalable. The above claims represent significant progress in scalable parallel matrix multiplication (as well as in solving many other important problems) on distributed memory systems, both theoretically and practically. (C) 2001 Academic Press.
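As a concrete illustration of the linear-speedup claim (not the paper's DMPC algorithm), the sketch below partitions C = A × B by row blocks across p worker processes, so each process performs roughly N^α/p of the work for the standard α = 3 method; the names parallel_matmul and multiply_block are illustrative only.

    # Minimal sketch, assuming the standard O(N^3) algorithm (alpha = 3):
    # row-block partitioning of C = A @ B across p worker processes.
    import numpy as np
    from multiprocessing import Pool

    def multiply_block(args):
        # Each worker multiplies its row block of A by the full B,
        # performing roughly N^3 / p scalar multiply-adds.
        a_block, b = args
        return a_block @ b

    def parallel_matmul(a, b, p):
        # Split A into p contiguous row blocks and compute the blocks of C in parallel.
        row_blocks = np.array_split(a, p, axis=0)
        with Pool(processes=p) as pool:
            c_blocks = pool.map(multiply_block, [(blk, b) for blk in row_blocks])
        return np.vstack(c_blocks)

    if __name__ == "__main__":
        n, p = 512, 4
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)
        c = parallel_matmul(a, b, p)
        assert np.allclose(c, a @ b)

Note that this toy version copies all of B to every worker, whereas the paper's DMPC and hypercubic-network results account explicitly for data movement, which is where the (N^2/p^(2/α)) (log p)^(2(α-1)/α) communication term in the abstract comes from.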
Pages: 1709-1731
Page count: 23
Related papers
50 items total
  • [2] New parallel matrix multiplication algorithm on distributed-memory concurrent computers
    Choi, Jaeyoung
    Proceedings of the Conference on High Performance Computing on the Information Superhighway, HPC Asia'97, 1997, : 224 - 229
  • [3] A new parallel matrix multiplication algorithm on distributed-memory concurrent computers
    Choi, J
    HIGH PERFORMANCE COMPUTING ON THE INFORMATION SUPERHIGHWAY - HPC ASIA '97, PROCEEDINGS, 1997, : 224 - 229
  • [4] An Efficient Sparse Matrix-Vector Multiplication on Distributed Memory Parallel Computers
    Shahnaz, Rukhsana
    Usman, Anila
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2007, 7 (01): : 77 - 82
  • [5] A new parallel matrix multiplication algorithm on distributed-memory concurrent computers
    Choi, J
    CONCURRENCY-PRACTICE AND EXPERIENCE, 1998, 10 (08): : 655 - 670
  • [6] A scalable parallel black oil simulator on distributed memory parallel computers
    Wang, Kun
    Liu, Hui
    Chen, Zhangxin
    JOURNAL OF COMPUTATIONAL PHYSICS, 2015, 301 : 19 - 34
  • [7] PUMMA: Parallel universal matrix multiplication algorithms on distributed-memory concurrent computers
    Choi, JY
    Dongarra, JJ
    Walker, DW
    CONCURRENCY-PRACTICE AND EXPERIENCE, 1994, 6 (07): : 543 - 570
  • [8] A scalable parallel graph coloring algorithm for distributed memory computers
    Boman, EG
    Bozdag, D
    Catalyurek, U
    Gebremedhin, AH
    Manne, F
    EURO-PAR 2005 PARALLEL PROCESSING, PROCEEDINGS, 2005, 3648 : 241 - 251
  • [9] Blocked-Based Sparse Matrix-Vector Multiplication on Distributed Memory Parallel Computers
    Shahnaz, Rukhsana
    Usman, Anila
    INTERNATIONAL ARAB JOURNAL OF INFORMATION TECHNOLOGY, 2011, 8 (02) : 130 - 136
  • [10] A framework for scalable greedy coloring on distributed-memory parallel computers
    Bozdag, Doruk
    Gebremedhin, Assefaw H.
    Manne, Fredrik
    Boman, Erik G.
    Catalyurek, Umit V.
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2008, 68 (04) : 515 - 535