Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure

Cited by: 69
Authors
Zeng, XQ [1]
Yeung, DS [2]
Affiliations
[1] Hohai Univ, Dept Comp Sci & Engn, Nanjing, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Hong Kong, Peoples R China
Keywords
neural network; multilayer perceptron; neuron pruning; sensitivity measure; relevance measure
DOI
10.1016/j.neucom.2005.04.010
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In the design of neural networks, choosing the proper size of a network for a given task is an important practical problem. One popular approach is to start with an oversized network and then prune it to a smaller size, achieving lower computational complexity and better generalization. This paper presents a pruning technique that uses a quantified sensitivity measure to remove as many of the least relevant neurons as possible from the hidden layer of a multilayer perceptron (MLP). The sensitivity of an individual neuron is defined as the expectation of its output deviation due to an expected input deviation, taken over inputs from a continuous interval, and the relevance of the neuron is defined as the product of its sensitivity value and the sum of the absolute values of its outgoing weights. The idea behind this relevance measure is that a neuron with lower relevance has less effect on its succeeding neurons and thus contributes less to the network as a whole. Pruning is performed by iteratively training the network to a certain performance criterion and then removing the hidden neuron with the lowest relevance value, until no further neuron can be removed. The technique is novel in both its quantified sensitivity measure and its relevance measure. Experimental results demonstrate the effectiveness of the pruning technique. (c) 2005 Elsevier B.V. All rights reserved.
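As a rough illustration of the relevance measure described above, the Python sketch below estimates each hidden neuron's sensitivity by Monte Carlo sampling over a continuous input interval and multiplies it by the sum of the absolute values of the neuron's outgoing weights; pruning would then repeatedly retrain the network and remove the lowest-relevance neuron. The sigmoid activation, the uniform sampling interval, the scalar input deviation input_dev, and all names here are illustrative assumptions, not the authors' exact formulation, which defines the sensitivity as an expectation over the input interval rather than via a sampled estimate.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_sensitivity(w_in, b, input_dev, interval=(0.0, 1.0),
                       n_samples=10000, seed=0):
    # Monte Carlo estimate of E_x[ |sigma(w.(x + dx) + b) - sigma(w.x + b)| ]
    # for inputs x drawn uniformly from the given continuous interval.
    rng = np.random.default_rng(seed)
    lo, hi = interval
    x = rng.uniform(lo, hi, size=(n_samples, w_in.size))
    return float(np.mean(np.abs(sigmoid((x + input_dev) @ w_in + b)
                                - sigmoid(x @ w_in + b))))

def hidden_neuron_relevance(W_in, b, W_out, input_dev=0.05):
    # relevance_j = sensitivity_j * sum_k |W_out[j, k]|  (hidden neuron j).
    n_hidden = W_in.shape[1]
    s = np.array([neuron_sensitivity(W_in[:, j], b[j], input_dev)
                  for j in range(n_hidden)])
    return s * np.abs(W_out).sum(axis=1)

# Usage: rank the hidden neurons of a 4-8-2 MLP with random weights;
# the lowest-relevance neuron would be the first pruning candidate.
rng = np.random.default_rng(42)
W_in = rng.normal(size=(4, 8))    # input-to-hidden weights
b = rng.normal(size=8)            # hidden biases
W_out = rng.normal(size=(8, 2))   # hidden-to-output weights
print(np.argsort(hidden_neuron_relevance(W_in, b, W_out)))

In the paper's procedure, removal stops once the retrained smaller network can no longer meet the performance criterion.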
Pages: 825-837
Page count: 13