Architecture reduction of a probabilistic neural network by merging k-means and k-nearest neighbour algorithms

Cited by: 10
Authors
Kusy, Maciej [1 ]
Kowalski, Piotr A. [2 ]
Affiliations
[1] Rzeszow Univ Technol, Fac Elect & Comp Engn, Al Powstancow Warszawy 12, PL-35959 Rzeszow, Poland
[2] AGH Univ Sci & Technol, Fac Phys & Appl Comp Sci, Al A Mickiewicza 30, PL-30059 Krakow, Poland
Keywords
Probabilistic neural network; k-means clustering; k-nearest neighbour; Architecture reduction; Classification; Kernel function; Reduced PNN; CLASSIFICATION; PREDICTION;
DOI
10.1016/j.asoc.2022.109387
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
A probabilistic neural network (PNN) has a sizable structure, since the activation of its hidden layer requires all training records; it therefore suffers from the curse of dimensionality. A hypothesis can thus be formulated that, in order to manage large data classification tasks, its inner design should be minimised. In this paper, we directly address this issue: a method for reducing the PNN's architecture is elaborated. It is organised as follows. First, k-means clustering of the data is conducted and the obtained centres are stored. Next, a single nearest neighbour to each determined centre is selected, considering each class separately. The pattern neurons of the PNN are then established using both (i) the cluster centres and (ii) the records closest to the obtained centroids. The algorithm is applied to the classification tasks of seven repository data sets. The utilised PNN is trained by means of four training techniques, with different kernel functions in each case. A 10-fold cross-validation method is applied to assess the performance of the original and reduced networks. The obtained results are also compared with those provided by existing methods in the literature. It is shown that, in the majority of classification cases, the reduced PNN achieves a higher accuracy than both the original network and the approaches introduced in the literature. (C) 2022 The Author(s). Published by Elsevier B.V.
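The reduction pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-class application of k-means, the naive Lloyd's-iteration clustering, the function names, and the Gaussian-kernel PNN classifier are all assumptions made for the example.

```python
import numpy as np


def kmeans(X, k, iters=50, seed=0):
    """Naive Lloyd's k-means; returns the k cluster centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each record to its nearest centre
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres


def reduce_pnn_patterns(X, y, k_per_class):
    """Build a reduced pattern layer: for each class, keep the k-means
    centres plus, for each centre, the single nearest training record
    (1-NN) of that class. Returns (patterns, pattern_classes)."""
    patterns, pattern_classes = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        centres = kmeans(Xc, min(k_per_class, len(Xc)))
        # nearest record of this class to each centre
        dists = np.linalg.norm(Xc[:, None, :] - centres[None, :, :], axis=2)
        nearest = Xc[dists.argmin(axis=0)]
        patterns.append(np.vstack([centres, nearest]))
        pattern_classes.append(np.full(2 * len(centres), c))
    return np.vstack(patterns), np.concatenate(pattern_classes)


def pnn_classify(x, patterns, pattern_classes, sigma=1.0):
    """Gaussian-kernel PNN decision: Parzen score per class, argmax."""
    scores = {}
    for c in np.unique(pattern_classes):
        d2 = np.sum((patterns[pattern_classes == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)
```

With, say, 100 training records and `k_per_class=3` on two classes, the hidden layer shrinks from 100 pattern neurons to 12 (3 centres plus 3 nearest records per class), which is the architecture reduction the paper targets.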
Pages: 14