Feature selection for least squares projection twin support vector machine

Cited: 26
Authors
Guo, Jianhui [1 ,5 ]
Yi, Ping [2 ]
Wang, Ruili [3 ]
Ye, Qiaolin [4 ]
Zhao, Chunxia [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Jiangsu, Peoples R China
[2] Southeast Univ, Sch Instrument Sci & Engn, Nanjing, Jiangsu, Peoples R China
[3] Massey Univ, Sch Engn & Adv Technol, Auckland, New Zealand
[4] Nanjing Forestry Univ, Sch Informat Technol, Nanjing, Jiangsu, Peoples R China
[5] Nanjing Univ Sci & Technol, Jiangsu Key Lab Image & Video Understanding Socia, Nanjing, Jiangsu, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Twin Support Vector Machine; Least Squares Projection Twin Support Vector Machine; Feature selection; SVM;
DOI
10.1016/j.neucom.2014.05.040
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a new feature selection approach for the recently proposed Least Squares Projection Twin Support Vector Machine (LSPTSVM) for binary classification. The 1-norm is used in our feature selection objective so that only the non-zero elements of the weight vectors are chosen as selected features. In addition, a Tikhonov regularization term is incorporated into the objective of our approach to alleviate the singularity problems of the Quadratic Programming Problems (QPPs), and its 1-norm measure is then minimized. This approach yields a strong feature suppression capability and is termed Feature Selection for Least Squares Projection Twin Support Vector Machine (FLSPTSVM). The solutions of FLSPTSVM can be obtained by solving two smaller QPPs arising from two primal problems, as opposed to the two dual ones in the Twin Support Vector Machine (TWSVM). Thus, FLSPTSVM is capable of generating sparse solutions, which means it can reduce the number of input features in the linear case. Our linear FLSPTSVM can also be extended to the nonlinear case via the kernel trick; when a nonlinear classifier is used, the number of kernel functions required by the classifier is reduced. Our experiments on publicly available datasets demonstrate that FLSPTSVM achieves classification accuracy comparable to that of LSPTSVM while obtaining sparse solutions. (C) 2014 Elsevier B.V. All rights reserved.
Pages: 174-183
Page count: 10