A CASE STUDY IN PROGRAMMING FOR PARALLEL-PROCESSORS

Cited by: 24
Author
ROSENFELD, JL
Affiliation
[1] IBM Thomas J. Watson Research Center, Yorktown Heights, NY
Keywords
convergence; electrical network; Gauss-Seidel; Jacobi; multiprocessor; multiprogramming; parallel programming; parallel-processor; parallelism; relaxation; simulation; storage interference; tasking
DOI
10.1145/363626.363628
Chinese Library Classification
TP3 [computing technology, computer technology];
Subject classification code
0812
Abstract
An affirmative partial answer is provided to the question of whether it is possible to program parallel-processor computing systems to efficiently decrease execution time for useful problems. Parallel-processor systems are multiprocessor systems in which several of the processors can simultaneously execute separate tasks of a single job, thus cooperating to decrease the solution time of a computational problem. The processors have independent instruction counters, meaning that each processor executes its own task program relatively independently of the other processors. Communication between cooperating processors is by means of data in storage shared by all processors. A program for the determination of the distribution of current in an electrical network was written for a parallel-processor computing system, and execution of this program was simulated. The data gathered from simulation runs demonstrate the efficient solution of this problem, typical of a large class of important problems. It is shown that, with proper programming, solution time when NP processors are applied approaches 1/NP times the solution time for a single processor, while improper programming can actually lead to an increase of solution time with the number of processors. Storage interference and other measures of performance are discussed. Stability of the method of solution was also investigated. © 1969 ACM. All rights reserved.
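The abstract describes splitting a single relaxation sweep over the network's nodal equations across NP cooperating processors that communicate only through shared storage, using the Jacobi and Gauss-Seidel variants named in the keywords. The sketch below is not the paper's program; it is a minimal Go illustration, under assumed values, of one way to realize the Jacobi (simultaneous-displacement) variant: the rows of a small, illustrative conductance system are partitioned round-robin among NP workers, and a barrier separates sweeps so every worker reads only the previous iterate.

// Minimal sketch (not Rosenfeld's original code) of parallel Jacobi relaxation
// for nodal equations g*v = cur of a small resistive network. The matrix,
// current vector, worker count, and tolerance are illustrative assumptions.
package main

import (
	"fmt"
	"math"
	"sync"
)

func main() {
	const np = 2 // number of cooperating workers (the "NP processors")

	// Illustrative, diagonally dominant conductance matrix and injected currents.
	g := [][]float64{
		{4, -1, -1, 0},
		{-1, 4, 0, -1},
		{-1, 0, 4, -1},
		{0, -1, -1, 4},
	}
	cur := []float64{1, 2, 0, 1}

	n := len(cur)
	v := make([]float64, n)    // current iterate of node voltages (shared storage)
	next := make([]float64, n) // next iterate, written in parallel

	for iter := 0; iter < 200; iter++ {
		var wg sync.WaitGroup
		for w := 0; w < np; w++ {
			wg.Add(1)
			go func(w int) {
				defer wg.Done()
				// Each worker owns a disjoint, round-robin subset of rows,
				// so no two workers ever write the same element of next.
				for row := w; row < n; row += np {
					s := cur[row]
					for col := 0; col < n; col++ {
						if col != row {
							s -= g[row][col] * v[col]
						}
					}
					next[row] = s / g[row][row]
				}
			}(w)
		}
		wg.Wait() // barrier: every task finishes this sweep before the next begins

		// Check convergence and roll the iterates.
		delta := 0.0
		for i := range v {
			delta = math.Max(delta, math.Abs(next[i]-v[i]))
			v[i] = next[i]
		}
		if delta < 1e-10 {
			fmt.Printf("converged after %d sweeps\n", iter+1)
			break
		}
	}
	fmt.Println("node voltages:", v)
}

Giving each worker a disjoint set of rows and synchronizing only once per sweep is the kind of "proper programming" the abstract credits with solution times approaching 1/NP of the single-processor time; making every update contend for the same shared data is the kind of improper programming that can instead make solution time grow with the number of processors.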
Pages: 645 / &
Related papers
50 records in total
  • [21] Teaching parallel programming and building parallel computers
    Liu, H
    Tkachyshyn, O
    McGee, B
    Kissinger, C
    FECS '05: Proceedings of the 2005 International Conference on Frontiers in Education: Computer Science and Computer Engineering, 2005, : 149 - 155
  • [22] Design of Parallel BEM Analyses Framework for SIMD Processors
    Hoshino, Tetsuya
    Ida, Akihiro
    Hanawa, Toshihiro
    Nakajima, Kengo
    COMPUTATIONAL SCIENCE - ICCS 2018, PT I, 2018, 10860 : 601 - 613
  • [23] Parallelism on multicore processors using Parallel.FX
    Marquez, A. L.
    Gil, C.
    Banos, R.
    Gomez, J.
    ADVANCES IN ENGINEERING SOFTWARE, 2011, 42 (05) : 259 - 265
  • [24] Medusa: A Parallel Graph Processing System on Graphics Processors
    Zhong, Jianlong
    He, Bingsheng
    SIGMOD RECORD, 2014, 43 (02) : 35 - 40
  • [25] Fast Algorithms of Anisotropic Diffusion Filters for Parallel Processors
    Kim, Hyun Kyu
    Lee, Hyo Jong
    2012 7TH INTERNATIONAL CONFERENCE ON COMPUTING AND CONVERGENCE TECHNOLOGY (ICCCT2012), 2012, : 1384 - 1389
  • [26] Parallel programming with a pattern language
    Massingill B.L.
    Mattson T.G.
    Sanders B.A.
    International Journal on Software Tools for Technology Transfer, 2001, 3 (2) : 217 - 234
  • [27] Parallel programming for multimedia applications
    Kalva, Hari
    Colic, Aleksandar
    Garcia, Adriana
    Furht, Borko
    MULTIMEDIA TOOLS AND APPLICATIONS, 2011, 51 (02) : 801 - 818
  • [28] Parallel Programming with Big Operators
    Park, Changhee
    Steele, Guy L., Jr.
    Tristan, Jean-Baptiste
    ACM SIGPLAN NOTICES, 2013, 48 (08) : 293 - 294
  • [29] Object oriented parallel programming
    Abbas, A
    Ahmad, A
    ISCON 2002: IEEE STUDENTS CONFERENCE ON EMERGING TECHNOLOGIES, PROCEEDINGS, 2002, : 89 - 93
  • [30] Parallel Logic Programming: A Sequel
    Dovier, Agostino
    Formisano, Andrea
    Gupta, Gopal
    Hermenegildo, Manuel V.
    Pontelli, Enrico
    Rocha, Ricardo
    THEORY AND PRACTICE OF LOGIC PROGRAMMING, 2022, 22 (06) : 905 - 973