Time efficient variants of Twin Extreme Learning Machine

Cited by: 4
Authors
Anand, Pritam [1 ]
Bharti, Amisha [2 ]
Rastogi, Reshma [3 ]
Affiliations
[1] Dhirubhai Ambani Inst Informat & Commun Technol, Gandhinagar 382007, India
[2] Jawaharlal Nehru Univ, Sch Comp & Syst Sci, Delhi 110067, India
[3] South Asian Univ, Dept Comp Sci, New Delhi 110021, India
Source
INTELLIGENT SYSTEMS WITH APPLICATIONS | 2023, Vol. 17
Keywords
Classification; Extreme Learning Machine; Twin Support Vector Machine; Twin Extreme Learning Machine
DOI
10.1016/j.iswa.2022.200169
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Twin Extreme Learning Machine models can achieve better generalization ability than the standard Extreme Learning Machine model, but they require solving a pair of quadratic programming problems to do so, which makes them more complex and computationally expensive than the standard Extreme Learning Machine model. In this paper, we propose two novel time-efficient formulations of the Twin Extreme Learning Machine that obtain the final classifier by solving only systems of linear equations. In this sense, they combine the benefits of the Twin Support Vector Machine and the standard Extreme Learning Machine in the true sense. We term our first formulation the 'Least Squares Twin Extreme Learning Machine'; it minimizes the L2-norm of the error variables in its optimization problem. Our second formulation, the 'Weighted Linear loss Twin Extreme Learning Machine', uses a weighted linear loss function to compute the empirical error, which makes it insensitive to outliers. Numerical results obtained on multiple benchmark datasets show that the proposed formulations are time efficient and have better generalization ability. Further, we apply the proposed formulations to the detection of phishing websites and show that they are much more effective at this task than other Extreme Learning Machine models.
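The record does not reproduce the paper's derivations, but the core idea of the least-squares variant — fitting two nonparallel hyperplanes in a random ELM feature space by solving one linear system each, instead of a pair of quadratic programs — can be sketched as below. This is a minimal illustration in the spirit of least-squares twin SVM applied to ELM features; the activation, hidden-layer size, regularization constants, and function names are assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for the random hidden layer


def elm_features(X, W, b):
    """Random ELM hidden layer with sigmoid activation, plus a bias column."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.hstack([H, np.ones((H.shape[0], 1))])


def fit_lstelm(X, y, n_hidden=50, C=1.0):
    """Least-squares twin hyperplanes in ELM feature space (labels in {+1, -1}).

    Each hyperplane comes from a single linear system, mirroring the
    least-squares twin SVM idea; no quadratic program is solved.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, untrained input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = elm_features(X, W, b)
    H1, H2 = H[y == 1], H[y == -1]
    ridge = 1e-6 * np.eye(H.shape[1])  # small regularizer for numerical stability
    # Plane 1 stays close to class +1 and keeps class -1 at unit distance:
    #   min ||H1 w1||^2 + C ||H2 w1 + e||^2  =>  (H1'H1 + C H2'H2) w1 = -C H2' e
    w1 = np.linalg.solve(H1.T @ H1 + C * H2.T @ H2 + ridge,
                         -C * H2.T @ np.ones(len(H2)))
    # Plane 2 is the symmetric problem for class -1:
    w2 = np.linalg.solve(H2.T @ H2 + C * H1.T @ H1 + ridge,
                         C * H1.T @ np.ones(len(H1)))
    return W, b, w1, w2


def predict(model, X):
    """Assign each point to the class of its nearest hyperplane."""
    W, b, w1, w2 = model
    H = elm_features(X, W, b)
    return np.where(np.abs(H @ w1) <= np.abs(H @ w2), 1, -1)
```

Because both weight vectors are obtained from `np.linalg.solve` on small `n_hidden`-sized systems, training cost is dominated by two Cholesky-style solves rather than iterative QP optimization, which is the time advantage the abstract claims for the proposed formulations.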
Pages: 15