A comparison of OpenMP and MPI for neural network simulations on a SunFire 6800

Cited by: 0
Authors
Strey, A [1]
Affiliations
[1] Univ Ulm, Dept Neural Informat Proc, D-89069 Ulm, Germany
Source
PARALLEL COMPUTING: SOFTWARE TECHNOLOGY, ALGORITHMS, ARCHITECTURES AND APPLICATIONS | 2004 / Vol. 13
Keywords
DOI
Not available
Chinese Library Classification
TP301 [Theory, methods]
Discipline classification code
081202
Abstract
This paper discusses several possibilities for the parallel implementation of a two-layer artificial neural network on a Symmetric Multiprocessor (SMP). Thread-parallel implementations based on OpenMP and process-parallel implementations based on the MPI communication library are compared. Different data and work partitioning strategies are investigated and the performance of all implementations is evaluated on a SunFire 6800.
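Purely as an illustrative sketch (not the paper's code): the thread-parallel approach described in the abstract can be pictured as OpenMP parallel loops over the neurons of each layer, i.e. a neuron-parallel work partitioning of the two-layer forward pass. All layer sizes, the tanh activation, and the variable names below are assumptions made for the example; the paper's actual partitioning strategies and network model may differ.

/*
 * Illustrative sketch only: thread-parallel forward pass of a generic
 * two-layer feed-forward network with OpenMP, using a neuron-parallel
 * work partitioning. Layer sizes and the tanh activation are assumed.
 */
#include <math.h>
#include <stdio.h>
#include <omp.h>

#define N_IN  64   /* input dimension (assumed)  */
#define N_HID 128  /* hidden neurons (assumed)   */
#define N_OUT 10   /* output neurons (assumed)   */

int main(void)
{
    static double w1[N_HID][N_IN], w2[N_OUT][N_HID];
    static double x[N_IN], h[N_HID], y[N_OUT];
    int i, j;

    /* fill weights and input with dummy values */
    for (i = 0; i < N_HID; i++)
        for (j = 0; j < N_IN; j++)
            w1[i][j] = 0.01 * (i + j);
    for (i = 0; i < N_OUT; i++)
        for (j = 0; j < N_HID; j++)
            w2[i][j] = 0.01 * (i - j);
    for (j = 0; j < N_IN; j++)
        x[j] = 1.0 / (j + 1);

    /* hidden layer: each thread computes a block of hidden neurons */
    #pragma omp parallel for private(j)
    for (i = 0; i < N_HID; i++) {
        double s = 0.0;
        for (j = 0; j < N_IN; j++)
            s += w1[i][j] * x[j];
        h[i] = tanh(s);
    }

    /* output layer: same neuron-parallel partitioning */
    #pragma omp parallel for private(j)
    for (i = 0; i < N_OUT; i++) {
        double s = 0.0;
        for (j = 0; j < N_HID; j++)
            s += w2[i][j] * h[j];
        y[i] = s;
    }

    printf("y[0] = %f (computed with %d threads)\n",
           y[0], omp_get_max_threads());
    return 0;
}

Compile with an OpenMP-capable compiler, e.g. cc -fopenmp -lm. A process-parallel MPI counterpart of this sketch would instead distribute row blocks of w1 and w2 across processes and combine the partial h and y vectors with a collective such as MPI_Allgather, which is the kind of OpenMP-versus-MPI trade-off the paper evaluates.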
Pages: 201-208
Number of pages: 8