ANALYSIS OF TRAINING SET PARALLELISM FOR BACKPROPAGATION NEURAL NETWORKS

Cited by: 1
Authors
KING, FS [1 ]
SARATCHANDRAN, P [1 ]
SUNDARARAJAN, N [1 ]
Affiliations
[1] Nanyang Technological University, School of Electrical and Electronic Engineering, Centre for Signal Processing, Singapore, Singapore
Keywords
DOI
10.1142/S0129065795000068
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Training set parallelism and network-based parallelism are two popular paradigms for parallelizing a feedforward (artificial) neural network. Training set parallelism is particularly suited to feedforward neural networks with backpropagation learning where the size of the training set is large in relation to the size of the network. This paper analyzes training set parallelism for feedforward neural networks implemented on a transputer array configured in a pipelined ring topology. Theoretical expressions for the time per epoch (iteration) and the optimal size of a processor network are derived when the training set is distributed equally among the processing nodes. These show that the speed-up is a function of the number of patterns per processor, the communication overhead per epoch, and the total number of processors in the topology. A further analysis of how to optimally distribute the training set on a given processor network when the number of patterns in the training set is not an integer multiple of the number of processors is also carried out. It is shown that optimal allocation of patterns in such cases is a mixed integer programming problem. Using this analysis it is found that equal distribution of training patterns among the processors is not the optimal allocation even when the number of training patterns is an integer multiple of the number of processors. The analysis is also extended to processor networks comprising processors of different speeds. Experimental results from a T805 transputer array are presented to verify all the theoretical results.
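As an illustration of the trade-off the abstract describes, the following Python sketch uses a simple hypothetical cost model (per-pattern compute time plus a per-processor communication overhead per epoch); the constants, names, and the cost model itself are assumptions for demonstration only, not the expressions derived in the paper. It shows how the epoch time first falls and then rises as processors are added, giving an optimal processor count.

```python
# Illustrative sketch only: a hypothetical linear cost model for training set
# parallelism on a ring of p processors. The constants and the model itself are
# assumptions for demonstration, not the expressions derived in the paper.
import math

N_PATTERNS = 4000        # total training patterns (assumed)
T_COMPUTE = 1.0e-3       # time to process one pattern, forward + backward pass (assumed)
T_COMM = 2.0e-2          # per-processor communication overhead per epoch (assumed)

def time_per_epoch(p: int) -> float:
    """Epoch time when the patterns are split as evenly as possible over p
    processors: the busiest node (ceil share) dominates the compute phase,
    while communication grows with the size of the ring."""
    patterns_on_busiest_node = math.ceil(N_PATTERNS / p)
    return patterns_on_busiest_node * T_COMPUTE + p * T_COMM

def optimal_processor_count(max_p: int = 64) -> int:
    """Brute-force the processor count that minimises epoch time under this model."""
    return min(range(1, max_p + 1), key=time_per_epoch)

if __name__ == "__main__":
    base = time_per_epoch(1)
    for p in (1, 2, 4, 8, 16, 32, 64):
        t = time_per_epoch(p)
        print(f"p={p:3d}  epoch time={t:.3f}s  speed-up={base / t:.2f}")
    print("optimal p under this model:", optimal_processor_count())
```

Under such a model, allocating patterns unevenly across processors of different speeds becomes an integer optimisation over the per-node pattern counts, which is the kind of problem the paper formalises as a mixed integer program.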
Pages: 61-78
Number of pages: 18
References
24 in total
[1] AIKEN SW, 1990, P ICNN 90, V2, P611
[2] BATTITI R, 1991, ARTIFICIAL NEURAL NE, V2, P1493
[3] Baum, Eric B.; Haussler, David. What Size Net Gives Valid Generalization? NEURAL COMPUTATION, 1989, 1(01):151-160
[4] Cichocki A., 1993, NEURAL NETWORKS OPTI
[5] Clarke, L; Wilson, G. TINY - An Efficient Routing Harness for the INMOS Transputer. CONCURRENCY-PRACTICE AND EXPERIENCE, 1991, 3(03):221-245
[6] EBERHART RC, 1990, NEURAL NETWORK PC TO
[7] FAHLMAN SE, 1988, CMUCS88162 CARN MELL
[8] FOO SK, 1993, EEECSP9301 NANY TECH
[9] Fujimoto, Y; Fukuda, N; Akabane, T. Massively Parallel Architectures for Large-Scale Neural Network Simulations. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1992, 3(06):876-888
[10] KAMANGAR FA, 1990, APPLICATION TRANSPUT, V1, P197