Robust activation function and its application: Semi-supervised kernel extreme learning method

Cited by: 32
Authors
Liu, Shenglan [1 ,2 ]
Feng, Lin [1 ,2 ]
Xiao, Yao [1 ,2 ]
Wang, Huibing [2 ]
Affiliations
[1] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[2] Dalian Univ Technol, Sch Innovat Expt, Dalian 116024, Peoples R China
Keywords
Semi-supervised classification; Extreme Learning Machine; Robust activation function; Kernel method; NONLINEAR DIMENSIONALITY REDUCTION; MACHINE; REGRESSION;
DOI
10.1016/j.neucom.2014.04.041
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Semi-supervised learning is a hot topic in the field of pattern recognition. This paper analyzes an effective classification algorithm, the Extreme Learning Machine (ELM). ELM has been widely used in pattern recognition and data mining applications for its extremely fast training speed and high recognition rate. However, many real-world applications exhibit irregular distributions and outliers, which lower the classification rate of ELM (kernel ELM). This is mainly because: (1) outliers and unreasonable selections of the activation function and kernel function cause overfitting, and (2) the labeled sample size is small and the information in unlabeled data is not fully exploited. To address the first problem, this paper proposes a robust activation function (RAF) based on an in-depth analysis of several different activation functions. RAF keeps the output of the activation function away from zero as much as possible and minimizes the impact of outliers on the algorithm, thus improving the performance of ELM (kernel ELM); moreover, RAF can be applied to other kernel methods and neural network learning algorithms. To address the second problem, we propose a semi-supervised kernel ELM (SK-ELM). Experimental results on synthetic and real-world datasets demonstrate that RAF and SK-ELM outperform ELM with other activation functions, as well as other semi-supervised (kernel) ELM methods. (C) 2014 Elsevier B.V. All rights reserved.
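For context on the base algorithm the abstract builds on: a standard ELM fixes random input weights and biases, computes the hidden-layer output matrix H, and solves for the output weights in closed form by regularized least squares. The sketch below illustrates that standard procedure only (with a tanh activation on toy data) — it is not the paper's RAF or SK-ELM method; the data, node count L, and regularization constant C are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs (illustrative, not the paper's datasets)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[y]  # one-hot target matrix

L = 40                         # number of hidden nodes (assumed)
W = rng.normal(size=(2, L))    # random input weights -- never trained in ELM
b = rng.normal(size=L)         # random hidden biases -- never trained either

H = np.tanh(X @ W + b)         # hidden-layer output matrix (N x L)

# Output weights beta via regularized least squares, as in regularized ELM:
# beta = (H^T H + I/C)^{-1} H^T T
C = 1.0
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)

pred = np.argmax(H @ beta, axis=1)
accuracy = (pred == y).mean()
```

Because only `beta` is solved for (one linear system, no iterative backpropagation), training is extremely fast, which is the property the abstract highlights.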
Pages: 318-328
Number of pages: 11
Cited references
18 in total
[1]  
Belkin M, 2004, 2004 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL III, PROCEEDINGS, P1000
[2]   Regularized Extreme Learning Machine [J].
Deng, Wanyu ;
Zheng, Qinghua ;
Chen, Lin .
2009 IEEE SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DATA MINING, 2009, :389-395
[3]   Maximal Similarity Embedding [J].
Feng, Lin ;
Liu, Sheng-lan ;
Wu, Zhen-yu ;
Jin, Bo .
NEUROCOMPUTING, 2013, 99 :423-438
[4]   Robust extreme learning machine [J].
Horata, Punyaphol ;
Chiewchanwattana, Sirapat ;
Sunat, Khamron .
NEUROCOMPUTING, 2013, 102 :31-44
[5]  
Huang G., IEEE T CYBERN
[6]  
Huang GB, 2004, I C CONT AUTOMAT ROB, P1029
[7]  
Huang GB., 2005, INT J INF TECHNOL, V11, P16
[8]   Convex incremental extreme learning machine [J].
Huang, Guang-Bin ;
Chen, Lei .
NEUROCOMPUTING, 2007, 70 (16-18) :3056-3062
[9]   Extreme learning machine: Theory and applications [J].
Huang, Guang-Bin ;
Zhu, Qin-Yu ;
Siew, Chee-Kheong .
NEUROCOMPUTING, 2006, 70 (1-3) :489-501
[10]   Universal approximation using incremental constructive feedforward networks with random hidden nodes [J].
Huang, Guang-Bin ;
Chen, Lei ;
Siew, Chee-Kheong .
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (04) :879-892