CST-Voting: A semi-supervised ensemble method for classification problems

Cited by: 10
Authors
Kostopoulos, G. [1]
Livieris, I. E. [2]
Kotsiantis, S. [1]
Tampakas, V. [2]
Affiliations
[1] Univ Patras, Dept Math, Educ Software Dev Lab, GR-26500 Patras, Greece
[2] Technol Educ Inst Western Greece, Dept Comp & Informat Engn, DISK Lab, Patras, Greece
Keywords
Semi-supervised learning; classification; voting; ensemble methods; accuracy
DOI
10.3233/JIFS-169571
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Semi-supervised learning is an emerging subfield of machine learning that aims to build efficient classifiers by exploiting a limited pool of labeled data together with a large pool of unlabeled data. Most studies on semi-supervised learning deal with classification problems, whose goal is to learn a function that maps an unlabeled instance to one of a finite number of classes. In this paper, a new semi-supervised classification algorithm based on a voting methodology, called CST-Voting, is proposed. Ensemble methods have been applied effectively in various scientific fields and often perform better than the individual classifiers from which they originate. The efficiency of the proposed algorithm is compared to that of three well-known semi-supervised learning methods on a plethora of benchmark datasets, using three representative supervised classifiers as base learners. Experimental results demonstrate the superiority of the proposed method, which outperforms classical semi-supervised classification algorithms, as illustrated by the accuracy measurements and confirmed by the Friedman Aligned Ranks nonparametric test.
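The abstract describes an ensemble that trains several semi-supervised learners on the same partially labeled data and aggregates their predictions by voting. The following is a minimal sketch of that voting idea only, not the authors' implementation: it assumes majority voting over semi-supervised members, and, since scikit-learn ships only a self-training wrapper, three self-training learners with different base classifiers (naive Bayes, a decision tree, k-nearest neighbors) stand in for the ensemble components. The dataset, the 80% unlabeled ratio, and the choice of base learners are illustrative assumptions.

```python
# Sketch: majority voting over semi-supervised learners (illustrative, not the paper's code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.tree import DecisionTreeClassifier

# Load a benchmark dataset and keep a held-out test set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate the semi-supervised setting: hide ~80% of the training labels
# (scikit-learn marks unlabeled samples with -1).
rng = np.random.RandomState(0)
y_semi = y_train.copy()
y_semi[rng.rand(len(y_semi)) < 0.8] = -1

# Three semi-supervised members built on different base learners
# (self-training stands in here for the ensemble's components).
members = [
    SelfTrainingClassifier(GaussianNB()),
    SelfTrainingClassifier(DecisionTreeClassifier(random_state=0)),
    SelfTrainingClassifier(KNeighborsClassifier()),
]
for member in members:
    member.fit(X_train, y_semi)

# Combine the members' predictions by simple majority vote.
votes = np.stack([member.predict(X_test) for member in members])
y_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("voting ensemble accuracy:", (y_pred == y_test).mean())
```

Using an odd number of members avoids ties under majority voting; with more classes or members, a weighted or probability-averaging combiner could replace the simple vote.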
Pages: 99-109
Number of pages: 11