Analysis and test of efficient methods for building recursive deterministic perceptron neural networks

Cited by: 11
Authors
Elizondo, David A. [1 ]
Birkenhead, Ralph [1 ]
Gogoraa, Mario [1 ]
Taillard, Eric [2 ]
Luyima, Patrick [1 ]
Affiliations
[1] De Montfort Univ, Fac Comp Sci & Engn, Ctr Comp Intelligence, Leicester LE1 9BH, Leics, England
[2] Univ Appl Sci Western Switzerland, EIVD, CH-1401 Yverdon, Switzerland
Keywords
recursive deterministic perceptron; batch learning; incremental learning; modular learning; performance sensitivity analysis; convergence time; generalisation; topology;
DOI
10.1016/j.neunet.2007.07.009
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. Unlike the single-layer perceptron, which can only handle linearly separable sets, this model is capable of solving any two-class classification problem. For all classification problems, the construction of an RDP is done automatically and convergence is always guaranteed. Three methods exist for constructing RDP neural networks: Batch, Incremental, and Modular. The Batch method has been extensively tested and shown to produce results comparable with those obtained with other neural network methods such as Back Propagation, Cascade Correlation, Rulex, and Ruleneg. However, the Incremental and Modular methods have not been tested before. Unlike the Batch method, the complexity of these two methods is not NP-Complete. For the first time, a study of the three methods is presented. This study highlights the main advantages and disadvantages of each method by comparing the RDP neural networks they build in terms of convergence time, level of generalisation, and topology size. The networks were trained and tested on the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer. The results show the effectiveness of the Incremental and Modular methods, whose performance is as good as that of the NP-Complete Batch method but at a much lower complexity level. The results obtained with the RDP are comparable to those obtained with the Back Propagation and Cascade Correlation algorithms. (c) 2007 Elsevier Ltd. All rights reserved.
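As a rough illustration of the RDP construction idea described in the abstract (not the authors' Batch, Incremental, or Modular procedures), the Python sketch below repeatedly finds a subset of points of one class that is linearly separable from all remaining points, adds an intermediate perceptron for that subset, and augments the input space with the new neuron's output until the whole augmented problem becomes linearly separable. All names (perceptron, separable_subset, build_rdp, predict) and the greedy subset-selection heuristic are hypothetical and chosen only for illustration.

import numpy as np

def perceptron(X, y, epochs=500, lr=1.0):
    # Plain single-layer perceptron on labels y in {-1, +1}.
    # Returns (w, b, converged); converged is True only if a separating
    # hyperplane was found within the epoch budget.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:
                w, b, mistakes = w + lr * yi * xi, b + lr * yi, mistakes + 1
        if mistakes == 0:
            return w, b, True
    return w, b, False

def separable_subset(X, y):
    # Crude greedy heuristic (an assumption of this sketch, not the paper's
    # method): grow a subset of one class that stays linearly separable from
    # every other point, using perceptron convergence as a separability proxy.
    for c in (+1, -1):
        idx = np.flatnonzero(y == c)
        other_mean = X[y != c].mean(axis=0)
        # try the points farthest from the opposite class first
        idx = idx[np.argsort(-np.linalg.norm(X[idx] - other_mean, axis=1))]
        chosen, best = [], None
        for i in idx:
            trial = chosen + [i]
            labels = np.where(np.isin(np.arange(len(y)), trial), 1.0, -1.0)
            w, b, ok = perceptron(X, labels)
            if ok:
                chosen, best = trial, (w, b)
        if best is not None:
            return best
    raise RuntimeError("no linearly separable one-class subset found")

def build_rdp(X, y, max_neurons=10):
    # Returns a list of (w, b) neurons; the last one is the output neuron.
    Z, layers = X.astype(float), []
    for _ in range(max_neurons):
        w, b, ok = perceptron(Z, y)
        if ok:
            return layers + [(w, b)]   # augmented problem is now separable
        w_s, b_s = separable_subset(Z, y)
        layers.append((w_s, b_s))
        Z = np.hstack([Z, np.sign(Z @ w_s + b_s).reshape(-1, 1)])
    raise RuntimeError("sketch exceeded max_neurons without converging")

def predict(layers, X):
    # Recompute the intermediate outputs in construction order, then apply
    # the final output neuron.
    Z = X.astype(float)
    for w, b in layers[:-1]:
        Z = np.hstack([Z, np.sign(Z @ w + b).reshape(-1, 1)])
    w, b = layers[-1]
    return np.sign(Z @ w + b)

On a small two-class problem (for example, two of the three IRIS classes), build_rdp terminates with a single output neuron when the classes are already linearly separable and adds intermediate neurons otherwise; the cheap greedy pass used here in place of an exhaustive subset search loosely mirrors the complexity trade-off that the Incremental and Modular methods are reported to offer over the NP-Complete Batch method.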
Pages: 1095-1108
Page count: 14