Context Dependent Input Weight Selection for Regression Extreme Learning Machines

Cited: 0
Authors
Rizk, Yara [1 ]
Awad, Mariette [1 ]
Affiliations
[1] Amer Univ Beirut, Dept Elect & Comp Engn, Beirut, Lebanon
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II | 2017 / Vol. 10614
Keywords
Extreme learning machines; Non-iterative training; Supervised learning; Regression;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Extreme learning machine (ELM) is a popular machine learning algorithm due to its fast non-iterative training and good generalization [2]. However, it randomly assigns input weights and biases from a uniform distribution, regardless of the characteristics of the training data. Exploiting these characteristics would produce a more specialized model instead of a "one size fits all" approach, potentially improving generalization while preserving ELM's fast training. Hence, we developed context dependent input weight selection for regression ELM (CDR-ELM), a non-iterative training algorithm for supervised regression. First, k-means clusters the input data into P clusters, where P is the number of hidden layer neurons. Then, differences between cluster heads are assigned to the input weights as described in (1), and biases are computed from cluster sizes as b_i = N_j / N_k. Finally, the ELM output weights are obtained by least squares.
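The pipeline in the abstract can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' implementation: the exact pairing of cluster heads in Eq. (1) and in the bias ratio b_i = N_j / N_k is not given in this record, so a cyclic next-neighbor pairing is assumed here, along with a sigmoid hidden activation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, P, iters=50):
    """Plain k-means; returns the P cluster heads and per-cluster counts."""
    centers = X[rng.choice(len(X), P, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for p in range(P):
            pts = X[labels == p]
            if len(pts):
                centers[p] = pts.mean(axis=0)
    counts = np.bincount(labels, minlength=P)
    return centers, counts

def cdr_elm_train(X, y, P):
    heads, counts = kmeans(X, P)
    # Input weights from cluster-head differences. The paper's Eq. (1)
    # defines the exact pairing; a cyclic next-neighbor pairing is
    # assumed in this sketch.
    W = heads - np.roll(heads, -1, axis=0)
    # Biases from cluster-size ratios b_i = N_j / N_k (same assumed
    # pairing); clip avoids division by an empty cluster.
    b = counts / np.roll(counts, -1).clip(min=1)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y               # least-squares output weights
    return W, b, beta

def cdr_elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

# Toy 1-D regression problem to exercise the sketch.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
W, b, beta = cdr_elm_train(X, y, P=20)
pred = cdr_elm_predict(X, W, b, beta)
print(float(np.mean((pred - y) ** 2)))
```

As in standard ELM, only the output weights beta are fit; the difference is that W and b come from the data's cluster structure rather than a uniform random draw.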
Pages: 749-750
Page count: 2
References
5 records
[1] Cervellera, Cristiano; Maccio, Danilo. Low-Discrepancy Points for Deterministic Assignment of Hidden Weights in Extreme Learning Machines. IEEE Transactions on Neural Networks and Learning Systems, 2016, 27(4): 891-896.
[2] Huang, G.B. Proc. IEEE IJCNN, 2004, p. 985.
[3] Liu, Xiao; Miao, Jun; Qing, Laiyun; Cao, Baoxiang. Class-Constrained Extreme Learning Machine. Proceedings of ELM-2015, Vol. 1: Theory, Algorithms and Applications (I), 2016, 6: 521-530.
[4] Tapson, J. Proc. ELM-2014, Vol. 1, 2015, p. 41. DOI: 10.1007/978-3-319-14063-6_
[5] Zhu, W.T. Proc. IEEE IJCNN, 2014, p. 800. DOI: 10.1109/IJCNN.2014.6889761