A comparison of the capabilities of PVM, MPI and JAVA for distributed parallel processing

Cited: 0
Authors
Eggen, R [1 ]
Eggen, M [1 ]
Institution
[1] Univ N Florida, Jacksonville, FL 32224 USA
Source
INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED PROCESSING TECHNIQUES AND APPLICATIONS, VOLS I-IV, PROCEEDINGS | 1998
Keywords
parallel and distributed processing; PVM; MPI; JAVA
DOI
Not available
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Networked Unix workstations, as well as workstations based on Windows 95 and Windows NT, are fast becoming the standard computing environment in many universities and research sites. To harness the tremendous computing potential represented by these networks of workstations, many new (and not so new) tools are being developed. Parallel Virtual Machine (PVM) and Message Passing Interface (MPI) have existed on Unix workstations for some time and are maturing in their capability for handling Distributed Parallel Processing (DPP). Recently, however, JAVA, with all of its followers, has begun to make an impact in the DPP arena as well. This paper explores each of these three vehicles for DPP, considering capability, ease of use, and availability. We examine the programmer interface of each, as well as their utility in solving real-world parallel processing applications. We show that each has its advantages. The bottom line is that programming a distributed cluster of workstations is challenging, worthwhile and fun!
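The comparison centers on the explicit message-passing programmer interface each system exposes. As a rough illustration only (not code from the paper), the following minimal C/MPI master-worker sketch shows the style of send/receive programming being compared; the task values and the doubling step are arbitrary placeholders.

/* Minimal master-worker sketch using only standard MPI calls:
 * the master sends one integer to each worker, which doubles it
 * and sends it back. Illustrative only, not taken from the paper. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                      /* master process */
        for (int w = 1; w < size; w++) {
            int task = w * 10;            /* placeholder work item */
            MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("worker %d returned %d\n", w, result);
        }
    } else {                              /* worker process */
        int task;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        task *= 2;                        /* placeholder computation */
        MPI_Send(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

With a typical MPI implementation this would be compiled with mpicc and launched across the workstations of a cluster with mpirun; PVM and JAVA would express the same pattern through pvm_send/pvm_recv and sockets or RMI, respectively.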
Pages: 237 - 241
Number of pages: 5
Related papers
49 records in total
  • [31] MPR: An MPI Framework for Distributed Self-adaptive Stream Processing
    Loff, Junior
    Griebler, Dalvan
    Fernandes, Luiz Gustavo
    Binder, Walter
    EURO-PAR 2024: PARALLEL PROCESSING, PT III, EURO-PAR 2024, 2024, 14803 : 400 - 414
  • [33] IMPLEMENTATION OF THE DISTRIBUTED PARALLEL PROGRAM FOR GEOID HEIGHTS COMPUTATION USING MPI AND OPENMP
    Lee, Seongkyu
    Kim, Jinsoo
    Jung, Yonghwa
    Choi, Jisun
    Choi, Chuluong
    XXII ISPRS CONGRESS, TECHNICAL COMMISSION IV, 2012, 39-B4 : 225 - 229
  • [34] Hybrid MPI/OpenMP parallel asynchronous distributed alternating direction method of multipliers
    Wang, Dongxia
    Lei, Yongmei
    Zhou, Jianhui
    COMPUTING, 2021, 103 (12) : 2737 - 2762
  • [35] Modules to teach parallel and distributed computing using MPI for Python and Disco
    Ortiz-Ubarri, Jose
    Arce-Nazario, Rafael
    Orozco, Edusmildo
    2016 IEEE 30TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2016, : 958 - 962
  • [36] A hierarchical distributed-shared memory parallel Branch&Bound application with PVM and OpenMP for multiprocessor clusters
    Aversa, R
    Di Martino, B
    Mazzocca, N
    Venticinque, S
    PARALLEL COMPUTING, 2005, 31 (10-12) : 1034 - 1047
  • [37] Multistep Scheduling Algorithm for Parallel and Distributed Processing with Communication Costs
    Yamazaki, Hitoshi
    Konishi, Katumi
    Shin, Seiichi
    Sawada, Kenji
    39TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY (IECON 2013), 2013, : 4482 - 4487
  • [38] Jobcast - Parallel and distributed processing framework: Data processing on a cloud style KVS database
    Nakagawa, Ikuo
    Nagami, Kenichi
    2012 IEEE/IPSJ 12TH INTERNATIONAL SYMPOSIUM ON APPLICATIONS AND THE INTERNET (SAINT), 2012, : 123 - 128
  • [39] PARALLEL SIMULATION OF A FLUID FLOW BY MEANS OF THE SPH METHOD: OPENMP VS. MPI COMPARISON
    Wroblewski, Pawel
    Boryczko, Krzysztof
    COMPUTING AND INFORMATICS, 2009, 28 (01) : 139 - 150
  • [40] Optimized parallel simulations of analytic bond-order potentials on hybrid shared/distributed memory with MPI and OpenMP
    Teijeiro, Carlos
    Hammerschmidt, Thomas
    Drautz, Ralf
    Sutmann, Godehard
    INTERNATIONAL JOURNAL OF HIGH PERFORMANCE COMPUTING APPLICATIONS, 2019, 33 (02) : 227 - 241