A novel robust kernel for classifying high-dimensional data using Support Vector Machines

Cited: 49
Author
Hussain, Syed Fawad [1 ]
Affiliation
[1] Ghulam Ishaq Khan Inst Engn Sci & Technol, Machine Learning & Data Sci MDS Lab, Fac Comp Sci & Engn, Topi, Pakistan
Keywords
Semantic kernels; Support Vector Machines; Co-clustering; Label noise; TEXT CLASSIFICATION; CLASSIFIERS; ALGORITHM;
DOI
10.1016/j.eswa.2019.04.037
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This paper presents a new semantic kernel for the classification of high-dimensional data in the framework of Support Vector Machines (SVMs). SVMs have gained widespread application due to their relatively high accuracy. The efficacy of SVMs, however, depends on the separability of the data itself as well as on the kernel function. Text data, for instance, is difficult to classify because of synonymy and polysemy in its contents, multi-topical instances that can result in mislabeling, and high sparsity in the bag-of-words representation. While the soft-margin parameter and the kernel trick are used in SVMs to deal with outliers and non-linearly separable data, the use of data statistics and correlations has not been fully explored in the literature. Motivated by the success of co-clustering and subspace clustering methods, this paper explores the use of co-similarity (i.e., soft co-clustering) to find latent relationships between documents. It is shown that weighted higher-order paths between instances in the data provide a good measure of similarity, which can then be used both for classification and to correct mislabeled (or outlier) data in the training set. The proposed kernel is generic in nature and suitable for sparse, dyadic data where direct co-occurrences are not necessarily common, as in the case of textual data, link analysis in social media networks, co-authorship, etc. The paper also studies the impact of noise in the training data and provides a technique to re-label such instances. It is further observed that re-labelling selected training data reduces the adverse effect of outliers or label noise and can greatly improve classification of the test data. To the best of our knowledge, we are the first to introduce a supervised co-similarity-based kernel function, and we also provide a mathematical formulation showing that it is a valid Mercer kernel.
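The weighted higher-order-path idea described above can be sketched as an iterative product of the document-term matrix with an evolving term-similarity matrix, in the spirit of the χ-Sim co-similarity measure the paper builds on. The function name, iteration count, and cosine-style normalization below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cosim_kernel(X, iters=2, eps=1e-12):
    """Co-similarity sketch: documents become similar when they share terms
    that are themselves similar (higher-order co-occurrence paths)."""
    n_docs, n_terms = X.shape
    SC = np.eye(n_terms)  # term-term similarity, initialized to identity
    SR = np.eye(n_docs)   # document-document similarity
    for _ in range(iters):
        SR = X @ SC @ X.T               # propagate term similarity to documents
        d = np.sqrt(np.diag(SR))
        SR = SR / (np.outer(d, d) + eps)  # cosine-style normalization, diag -> 1
        SC = X.T @ SR @ X               # propagate document similarity to terms
        d = np.sqrt(np.diag(SC))
        SC = SC / (np.outer(d, d) + eps)
    return SR

# Tiny toy document-term matrix: two topic blocks of two documents each.
X = np.array([[2., 1, 0, 0],
              [1., 2, 0, 0],
              [0., 0, 2, 1],
              [0., 0, 1, 2]])
K = cosim_kernel(X)
```

Since the result is a symmetric Gram matrix, it can be passed directly to an SVM implementation that accepts precomputed kernels, e.g. scikit-learn's `SVC(kernel='precomputed').fit(K, y)`.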
Our experiments show that the proposed framework outperforms current state-of-the-art methods in terms of classification accuracy and is more resilient to label noise. (C) 2019 Elsevier Ltd. All rights reserved.
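One plausible way to realize the re-labelling step mentioned in the abstract is a kernel-weighted vote among the other training instances, flipping a label only when another class dominates strongly. The voting rule and threshold here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def relabel_noisy(K, y, threshold=0.6):
    """Flip a training label when the kernel-weighted vote of the other
    instances strongly favors a different class (illustrative heuristic)."""
    y = np.asarray(y)
    y_new = y.copy()
    classes = np.unique(y)
    for i in range(len(y)):
        w = K[i].copy()
        w[i] = 0.0  # exclude self-similarity from the vote
        votes = np.array([w[y == c].sum() for c in classes])
        total = votes.sum()
        if total > 0:
            best = classes[np.argmax(votes)]
            if best != y[i] and votes.max() / total > threshold:
                y_new[i] = best  # re-label the suspect instance
    return y_new

# Two clusters of three points; the last label is corrupted (0 instead of 1).
K = np.array([[1.0, 0.9, 0.9, 0.1, 0.1, 0.1],
              [0.9, 1.0, 0.9, 0.1, 0.1, 0.1],
              [0.9, 0.9, 1.0, 0.1, 0.1, 0.1],
              [0.1, 0.1, 0.1, 1.0, 0.9, 0.9],
              [0.1, 0.1, 0.1, 0.9, 1.0, 0.9],
              [0.1, 0.1, 0.1, 0.9, 0.9, 1.0]])
y = np.array([0, 0, 0, 1, 1, 0])
y_clean = relabel_noisy(K, y)  # → corrupted label flipped back to 1
```

Voting against the original labels (rather than labels updated mid-loop) keeps the heuristic order-independent; the threshold trades off correcting noise against disturbing clean labels.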
Pages: 116-131 (16 pages)
Cited References (47 total)
[1]   A corpus-based semantic kernel for text classification by using meaning values of terms [J].
Altinel, Berna ;
Ganiz, Murat Can ;
Diri, Banu .
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2015, 43 :54-66
[2]  
Altinel B, 2014, LECT NOTES ARTIF INT, V8467, P505, DOI 10.1007/978-3-319-07173-2_43
[3]   Towards enriching the quality of k-nearest neighbor rule for document classification [J].
Basu, Tanmay ;
Murthy, C. A. .
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2014, 5 (06) :897-905
[4]  
Bijalwan Vishwanath, 2014, Int J Database Theory Appl, V7, P61, DOI 10.14257/ijdta.2014.7.1.06
[5]   Co-clustering of Multi-View Datasets: a Parallelizable Approach [J].
Bisson, Gilles ;
Grimal, Clement .
12TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2012), 2012, :828-833
[6]   χ-Sim: A New Similarity Measure for the Co-clustering Task [J].
Bisson, Gilles ;
Hussain, Fawad .
SEVENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, PROCEEDINGS, 2008, :211-217
[7]  
Bloehdorn S, 2006, IEEE DATA MINING, P808
[8]  
Chakraborti S, 2006, LECT NOTES COMPUT SC, V3936, P510
[9]   A feature weighted support vector machine and K-nearest neighbor algorithm for stock market indices prediction [J].
Chen, Yingjun ;
Hao, Yongtao .
EXPERT SYSTEMS WITH APPLICATIONS, 2017, 80 :340-355
[10]   SUPPORT-VECTOR NETWORKS [J].
CORTES, C ;
VAPNIK, V .
MACHINE LEARNING, 1995, 20 (03) :273-297