C-Loss-Based Doubly Regularized Extreme Learning Machine

Times cited: 1
Authors
Wu, Qing [1 ]
Fu, Yan-Lin [1 ]
Cui, Dong-Shun [2 ]
Wang, En [3 ]
Affiliations
[1] Xian Univ Posts & Telecommun, Sch Automat, Xian 710121, Peoples R China
[2] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[3] Xian Shiyou Univ, Sch Marxism, Xian 710065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Extreme learning machine; C-loss function; Feature selection; Regularization; Regression
DOI
10.1007/s12559-022-10050-2
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Extreme learning machines have become a significant learning methodology due to their efficiency. However, an extreme learning machine may overfit because it is highly sensitive to outliers. In this paper, a novel extreme learning machine, the C-loss-based doubly regularized extreme learning machine, is presented to handle dimensionality reduction and overfitting. The proposed algorithm benefits from both the L-1 norm and the L-2 norm and replaces the square loss function with a C-loss function. The C-loss-based doubly regularized extreme learning machine can complete feature selection and training simultaneously, and it can also suppress noise and irrelevant information in the data to reduce dimensionality. To demonstrate its effectiveness in dimensionality reduction, we test it on the Swiss Roll dataset and obtain efficient and stable performance. Experimental results on different types of artificial and benchmark datasets show that the proposed method achieves much better regression results and faster training than the compared methods. Performance analysis also shows that it significantly decreases training time, alleviates overfitting, and improves generalization ability.
Pages: 496-519
Number of pages: 24
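The abstract describes an objective that combines a C-loss data term with both L-1 and L-2 penalties on the ELM output weights. The sketch below is a minimal illustration of that kind of objective, not the paper's actual formulation: the C-loss form 1 - exp(-e^2 / (2*sigma^2)), the sigmoid hidden layer, and the names cdr_elm_objective, lam1, lam2, and sigma are assumptions introduced here for illustration only.

```python
import numpy as np

def c_loss(residual, sigma=1.0):
    # Correntropy-induced (C-) loss: a bounded, smooth alternative to the
    # squared loss; it saturates for large residuals, so outliers contribute
    # only a limited amount to the objective.
    return 1.0 - np.exp(-residual ** 2 / (2.0 * sigma ** 2))

def hidden_layer(X, W, b):
    # ELM hidden layer: random, fixed input weights W and biases b with a
    # sigmoid activation (a common ELM choice, assumed here).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def cdr_elm_objective(beta, H, T, lam1=0.01, lam2=0.01, sigma=1.0):
    # C-loss data term plus L1 (sparsity / feature selection) and
    # L2 (shrinkage / stability) penalties on the output weights beta.
    residual = T - H @ beta
    return (np.sum(c_loss(residual, sigma))
            + lam1 * np.sum(np.abs(beta))
            + lam2 * np.sum(beta ** 2))

# Toy usage: evaluate the objective for random data and weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # 100 samples, 5 input features
T = rng.normal(size=(100, 1))        # regression targets
W = rng.normal(size=(5, 20))         # random input weights (fixed in ELM)
b = rng.normal(size=(1, 20))         # random hidden biases
H = hidden_layer(X, W, b)            # 100 x 20 hidden-layer output matrix
beta = rng.normal(size=(20, 1))      # output weights to be optimized
print(cdr_elm_objective(beta, H, T))
```

In practice the output weights beta would be found by minimizing this objective, which the paper addresses with its own training procedure; the snippet above only evaluates the objective for given weights.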