Sample selection-based hierarchical extreme learning machine

Cited by: 4
Authors
Xu, Xinzheng [1 ,2 ,3 ]
Li, Shan [1 ,2 ]
Liang, Tianming [1 ]
Sun, Tongfeng [1 ,3 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Jiangsu, Peoples R China
[2] Minist Educ, Engn Res Ctr Min Digital, Xuzhou 221116, Jiangsu, Peoples R China
[3] Lanzhou Jiaotong Univ, Key Lab Optotechnol & Intelligent Control, Minist Educ, Lanzhou 730070, Gansu, Peoples R China
Keywords
Sample selection; Fuzzy C-means clustering; Condensed nearest neighbour; Hierarchical extreme learning machine
Keywords Plus: ELM; ALGORITHM; DEEP
DOI
10.1016/j.neucom.2019.10.013
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Large amounts of training data in machine learning can maintain high accuracy to a certain extent, but the time costs are high owing to the sheer volume and dimensionality of the data. Therefore, how to simultaneously select the most useful training data and extract the main features of the samples, especially for image data, are essential problems that urgently need to be solved in the field of large-scale machine learning. Herein, a training sample selection method based on the fuzzy c-means clustering algorithm (FCM) is proposed to address these problems. It first utilises the condensed nearest neighbour (CNN) rule to make a preliminary selection of training samples. Then, it utilises FCM to obtain the centres of the selected data, and, finally, it effectively condenses the samples using a compression parameter. Meanwhile, considering the critical influence of the sample features on the classification model, this paper selects the hierarchical extreme learning machine (H-ELM) model to better solve the classification task. Based on this, the paper presents the FCM-CNN-H-ELM framework for data classification, which combines FCM-based CNN and H-ELM. The results of the experiments show that the proposed training sample selection method and classification framework can guarantee consistent, even higher, prediction results with a small number of training samples, and significantly reduce the training time. (C) 2019 Elsevier B.V. All rights reserved.
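The three-step selection pipeline described in the abstract (CNN pre-selection, FCM clustering of the selected data, then distance-based condensation) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the exact role of the "compression parameter" is not specified in the abstract, so it is interpreted here as the fraction of each fuzzy cluster retained, and the final H-ELM training stage is omitted.

```python
import numpy as np

def condensed_nearest_neighbour(X, y, rng):
    # Hart's CNN rule: greedily build a "store" whose 1-NN classifier
    # labels every remaining training sample correctly.
    idx = list(rng.permutation(len(X)))
    store = [idx.pop(0)]                      # seed with one sample
    changed = True
    while changed:
        changed = False
        for i in list(idx):
            d = np.linalg.norm(X[store] - X[i], axis=1)
            if y[store[int(np.argmin(d))]] != y[i]:
                store.append(i)               # absorb misclassified sample
                idx.remove(i)
                changed = True
    return np.array(store)

def fuzzy_c_means(X, c, m=2.0, iters=100, rng=None):
    # Standard FCM updates; returns cluster centres and membership matrix U.
    if rng is None:
        rng = np.random.default_rng(0)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))           # u_ik ∝ d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

def compress(X, centres, U, ratio):
    # Assumed meaning of the compression parameter: keep, per fuzzy
    # cluster, only the `ratio` fraction of members closest to the centre.
    labels = U.argmax(axis=1)
    keep = []
    for k in range(len(centres)):
        members = np.where(labels == k)[0]
        if len(members) == 0:
            continue
        d = np.linalg.norm(X[members] - centres[k], axis=1)
        n_keep = max(1, int(np.ceil(ratio * len(members))))
        keep.extend(members[np.argsort(d)[:n_keep]])
    return np.sort(np.array(keep))

# Toy demo: two well-separated Gaussian blobs.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.5, (60, 2)), rng.normal(4.0, 0.5, (60, 2))])
y = np.array([0] * 60 + [1] * 60)

cnn_idx = condensed_nearest_neighbour(X, y, rng)             # step 1
centres, U = fuzzy_c_means(X[cnn_idx], c=2, rng=rng)         # step 2
final_idx = cnn_idx[compress(X[cnn_idx], centres, U, 0.5)]   # step 3
print(len(X), "->", len(cnn_idx), "->", len(final_idx))
```

In the full framework the samples indexed by `final_idx` would then be fed to an H-ELM classifier; the expected effect is a much smaller training set whose 1-NN structure still covers the class boundaries.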
Pages: 95-102
Number of pages: 8
References
45 records
[1] Angiulli F, 2005, Proceedings of the 22nd International Conference on Machine Learning, p. 25, DOI 10.1145/1102351.1102355
[2] [Anonymous], 2008, Proceedings of the 25th International Conference on Machine Learning
[3] [Anonymous], [No title captured]
[4] [Anonymous], 2015, 2015 International Joint Conference on Neural Networks (IJCNN)
[5] [Anonymous], [No title captured]
[6] [Anonymous], 2011, Proceedings of the 5th International Conference on Ubiquitous Information Management and Communication, DOI 10.1145/1968613.1968619
[7] Atli, Buse Gul; Miche, Yoan; Kalliola, Aapo; Oliver, Ian; Holtmanns, Silke; Lendasse, Amaury. Anomaly-Based Intrusion Detection Using Extreme Learning Machine and Aggregation of Network Traffic Statistics in Probability Space. Cognitive Computation, 2018, 10(5): 848-863
[8] Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M. Brandon. Sparse Extreme Learning Machine for Classification. IEEE Transactions on Cybernetics, 2014, 44(10): 1858-1870
[9] Beluco A, 2017, Journal of Risk and Financial Management, V10, DOI 10.3390/jrfm10010006
[10] Blum, AL; Langley, P. Selection of relevant features and examples in machine learning. Artificial Intelligence, 1997, 97(1-2): 245-271