Towards efficient execution of MPI applications on the grid: Porting and optimization issues

Citations: 0
Authors
Rainer Keller
Edgar Gabriel
Bettina Krammer
Matthias S. Müller
Michael M. Resch
Affiliations
[1] High Performance Computing Center Stuttgart (HLRS), Stuttgart
[2] Innovative Computing Laboratory, Computer Science Department, University of Tennessee, Knoxville, TN
Source
Keller, R. (keller@hlrs.de) | Kluwer Academic Publishers, Issue 01
Keywords
Computational grids; Metacomputing; MPI; Optimizations for communication hierarchies; Parallel debugging
DOI
10.1023/B:GRID.0000024071.12177.91
Abstract
The message passing interface (MPI) is a standard used by many parallel scientific applications. It offers the advantage of a smoother migration path for porting applications from high performance computing systems to the Grid. In this paper, Grid-enabled tools and libraries for developing MPI applications are presented. The first is MARMOT, a tool that checks the adherence of an application to the MPI standard. The second is PACX-MPI, an implementation of the MPI standard optimized for Grid environments. Besides the efficient development of the program, an optimal execution is of paramount importance for most scientific applications. We therefore discuss not only performance on the level of the MPI library, but also several application-specific optimizations, such as latency hiding, prefetching, caching, and topology-aware algorithms, e.g., for a sparse parallel equation solver and an RNA folding code. © 2004 Kluwer Academic Publishers.
Pages: 133-149
Page count: 16
References
38 items
[1]  
Allen G., Dramlitsch T., Foster I., Karonis N.T., Ripeanu M., Seidel E., Toonen B., Supporting Efficient Execution in Heterogeneous Distributed Computing Environments with Cactus and Globus, Proceedings of the 2001 ACM/IEEE Supercomputing Conference (SC 2001), (2001)
[2]  
Baldwin R.L., Rose G.D., Is Protein Folding Hierarchic? II - Local Structure and Peptide Folding, TIBS, 24, pp. 77-83, (1999)
[3]  
Barberou N., Garbey M., Hess M., Resch M., Toivanen J., Rossi T., Tromeur-Dervout D., Aitken-Schwarz Method for Efficient Metacomputing of Elliptic Equations, Proceedings of the Fourteenth Domain Decomposition Meeting, pp. 349-356, (2002)
[4]  
Berman F., Chien A., Cooper K., Dongarra J., Foster I., Gannon D., Johnsson L., Kennedy K., Kesselman C., Reed D., Torczon L., Wolski R., The GrADS Project: Software Support for High-Level Grid Application Development, International Journal of High Performance Computing Applications, 15, 4, pp. 327-344, (2001)
[5]  
Bönisch T.P., Rühle R., Adaptation of a 3-D Flow-Solver for Use in a Metacomputing Environment, Parallel Computational Fluid Dynamics, Development and Applications of Parallel Technology, pp. 119-125, (1999)
[6]  
Bouteiller A., Cappello F., Herault T., Krawezik G., Lemarinier P., Magniette F., MPICH-V2: A Fault Tolerant MPI for Volatile Nodes based on the Pessimistic Sender Based Message Logging, Proceedings of the 2003 ACM/IEEE Supercomputing Conference (SC 2003), (2003)
[7]  
Brunst H., Winkler M., Nagel W.E., Hoppe H.-C., Performance Optimization for Large Scale Computing: The Scalable VAMPIR Approach, International Conference On Computational Science - ICCS 2001, 2074, 2, pp. 751-760, (2001)
[8]  
Brunst H., Nagel W.E., Hoppe H.-C., Group-Based Performance Analysis of Multithreaded SMP Cluster Applications, Euro-Par 2001 Parallel Processing, pp. 148-153, (2001)
[9]  
Fagg G.E., Dongarra J.J., FT-MPI: Fault Tolerant MPI, Supporting Dynamic Applications in a Dynamic World, Recent Advances In Parallel Virtual Machine and Message Passing Interface, pp. 346-353, (2000)
[10]  
Fagg G.E., London K.S., Dongarra J.J., MPI_Connect: Managing Heterogeneous MPI Applications Interoperation and Process Control, Recent Advances In Parallel Virtual Machine and Message Passing Interface, 1497, pp. 93-96, (1998)