Pruning Method in the Architecture of Extreme Learning Machines Based on Partial Least Squares Regression

Cited by: 5
Authors
Vitor, P. [1,2]
Affiliations
[1] Fac UNA Betim, Av Gov Valadares,640 Ctr, BR-32510010 Betim, MG, Brazil
[2] CEFET MG, Av Gov Valadares,640 Ctr, BR-32510010 Betim, MG, Brazil
Keywords
Extreme learning machine; pattern classification; pruning methods; partial least squares regression; improvement
DOI
10.1109/TLA.2018.8804250
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Extreme learning machines (ELM) are useful models for performing classification and regression, and an alternative to techniques that use back-propagation to determine the parameters of the hidden layers of the learning model. A problem an ELM may face in data-mining tasks concerns the number of neurons in its hidden layer: with too many neurons the model becomes overly generalist, with inaccurate or time-consuming processing, while a model too restricted to the characteristics of the training sample loses its generalization capacity. As the number of neurons in the hidden layer increases, information unnecessary to the model can be included in its operations, impairing the final accuracy of the ELM when classifying or regressing data. To address this problem, a pruning method based on partial least squares regression is proposed to act on the neurons of the hidden layer of an ELM. A ranking vector of regression values between the variables of the hidden layer of the ELM and the response is obtained, and the neurons with the best predictive capacity over the data are selected. Classification tests were performed on datasets commonly used in machine-learning research and compared against other classifier models. The results show that an ELM with fewer neurons in the hidden layer, where the selected neurons are those that contribute most to the classification, improves the final accuracy of the model in comparison with an architecture with a much larger number of neurons, and is statistically similar to other methods for pruning the neurons of the inner layer of the ELM. This allows a gain in model performance without damaging the accuracy of the final results or the model's ability to perform classification tasks correctly.
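The procedure described in the abstract can be sketched as follows: rank the hidden neurons of an ELM by a PLS-style regression between the hidden-layer activations and the response, keep the best predictors, and refit the output weights by least squares. This is an illustrative sketch, not the authors' code; the toy data, the network sizes, and the use of only the first PLS component as the ranking criterion are assumptions.

```python
# Illustrative sketch (not the paper's implementation) of pruning ELM
# hidden neurons with a PLS-based ranking, assuming a single-output
# binary classification task and NumPy only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: binary labels determined by the first two features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Random hidden layer of a standard ELM (input weights are not trained).
n_hidden = 50
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                      # hidden-layer activations

# First PLS component: the weight of each hidden neuron is its
# covariance with the centered target; |weight| ranks predictive capacity.
Hc = H - H.mean(axis=0)
yc = y - y.mean()
pls_weights = (Hc.T @ yc).ravel()
ranking = np.argsort(-np.abs(pls_weights))  # best-ranked neurons first

# Prune: keep the top-k neurons, then refit the output weights by
# least squares (Moore-Penrose pseudoinverse), as in a plain ELM.
k = 20
keep = ranking[:k]
H_pruned = H[:, keep]
beta = np.linalg.pinv(H_pruned) @ y

pred = (H_pruned @ beta > 0.5).astype(float)
acc = float((pred == y).mean())             # training accuracy of the pruned model
```

In this sketch the pruned network keeps less than half of the original hidden neurons yet still fits the toy labels well, which is the effect the abstract reports on real benchmark datasets.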
Pages: 2864-2871
Page count: 8