Time efficient variants of Twin Extreme Learning Machine

Cited by: 4
Authors
Anand, Pritam [1 ]
Bharti, Amisha [2 ]
Rastogi, Reshma [3 ]
Affiliations
[1] Dhirubhai Ambani Inst Informat & Commun Technol, Gandhinagar 382007, India
[2] Jawaharlal Nehru Univ, Sch Comp & Syst Sci, Delhi 110067, India
[3] South Asian Univ, Dept Comp Sci, New Delhi 110021, India
Source
INTELLIGENT SYSTEMS WITH APPLICATIONS | 2023, Vol. 17
Keywords
Classification; Extreme Learning Machine; Twin Support Vector Machine; Twin Extreme Learning Machine;
DOI
10.1016/j.iswa.2022.200169
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Twin Extreme Learning Machine models can obtain better generalization ability than the standard Extreme Learning Machine model, but they require the solution of a pair of quadratic programming problems, which makes them more complex and computationally expensive than the standard Extreme Learning Machine model. In this paper, we propose two novel time-efficient formulations of the Twin Extreme Learning Machine that obtain the final classifier by solving only systems of linear equations. In this sense, they combine the benefits of the Twin Support Vector Machine and the standard Extreme Learning Machine. We term our first formulation the 'Least Squared Twin Extreme Learning Machine'; it minimizes the L2-norm of the error variables in its optimization problem. Our second formulation, the 'Weighted Linear loss Twin Extreme Learning Machine', uses a weighted linear loss function for calculating the empirical error, which makes it insensitive to outliers. Numerical results obtained on multiple benchmark datasets show that the proposed formulations are time efficient and achieve better generalization ability. Further, we apply the proposed formulations to the detection of phishing websites and show that they are much more effective at this task than other Extreme Learning Machine models.
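The core idea the abstract describes — twin class-wise hyperplanes fitted in a random ELM feature space via linear systems rather than quadratic programs — can be sketched as follows. This is a minimal illustrative implementation of the generic least-squares twin approach, assuming a sigmoid random feature map and a small ridge term for numerical stability; the function names, hyperparameters, and exact objective terms are assumptions for illustration, not the authors' published formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    """Random ELM feature map: sigmoid of a fixed random affine projection."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_ls_telm(X, y, n_hidden=50, c1=1.0, c2=1.0, ridge=1e-6):
    """Least-squares twin ELM sketch for binary labels in {+1, -1}.

    One hyperplane per class in the random feature space; each is found by
    solving a regularized linear system (no quadratic programming), which
    is the source of the speed-up the abstract describes. The `ridge` term
    is a numerical safeguard added here, not part of the paper.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = elm_features(X, W, b)
    # Append a bias column so each plane is h(x)^T w + b0 = 0.
    HA = np.hstack([H[y == 1], np.ones((np.sum(y == 1), 1))])
    HB = np.hstack([H[y == -1], np.ones((np.sum(y == -1), 1))])
    I = ridge * np.eye(n_hidden + 1)
    # Plane 1: close to class +1, pushed to distance 1 from class -1 (squared slack).
    z1 = np.linalg.solve(HA.T @ HA + c1 * HB.T @ HB + I,
                         -c1 * HB.T @ np.ones(HB.shape[0]))
    # Plane 2: close to class -1, pushed to distance 1 from class +1.
    z2 = np.linalg.solve(HB.T @ HB + c2 * HA.T @ HA + I,
                         c2 * HA.T @ np.ones(HA.shape[0]))
    return W, b, z1, z2

def predict_ls_telm(X, model):
    """Assign each point to the class whose plane is nearer in feature space."""
    W, b, z1, z2 = model
    H = np.hstack([elm_features(X, W, b), np.ones((X.shape[0], 1))])
    d1 = np.abs(H @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(H @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 1, -1)
```

Because each plane comes from one `np.linalg.solve` call on an (n_hidden + 1)-sized system, training cost is dominated by forming the Gram matrices, which is the practical advantage over QP-based Twin ELM solvers.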
Pages: 15