A Selection Metric for semi-supervised learning based on neighborhood construction

Cited: 16
Authors
Emadi, Mona [1 ]
Tanha, Jafar [1 ]
Shiri, Mohammad Ebrahim [2 ,4 ]
Aghdam, Mehdi Hosseinzadeh [3 ]
Affiliations
[1] Univ Tabriz, Comp & Elect Engn Dept, Tabriz, Iran
[2] Islamic Azad Univ, Dept Comp Engn, Borujerd Branch, Borujerd, Iran
[3] Univ Bonab, Dept Comp Engn, Bonab, Iran
[4] Univ AmirKabir, Dept Comp Sci, Tehran, Iran
Keywords
Apollonius circle; Semi-supervised classification; Self-training; Support vector machine; Neighborhood construction;
DOI
10.1016/j.ipm.2020.102444
CLC Number
TP [automation technology; computer technology];
Discipline Code
0812 ;
Abstract
The present paper focuses on semi-supervised classification problems. Semi-supervised learning is a learning task that uses both labeled and unlabeled samples. One of the main issues in semi-supervised learning is choosing a proper selection metric for sampling from the unlabeled data in order to extract informative unlabeled data points; this is vital for semi-supervised self-training algorithms. Most self-training algorithms employ the probability estimates of the underlying base learner to select high-confidence predictions, which are not always useful for improving the decision boundary. In this study, a novel self-training algorithm is proposed based on a new selection metric that uses a neighborhood construction algorithm. We select unlabeled data points that are close to the decision boundary. Although these points are not high-confidence according to the probability estimates of the underlying base learner, they are more effective for finding an optimal decision boundary. To assign the correct labels to these data points, we require agreement between the classifier's predictions and the neighborhood construction algorithm. The proposed approach uses a neighborhood construction algorithm that employs peak data points and an Apollonius circle to sample from the unlabeled data; it then labels the selected points at each iteration of training only when the classifier's prediction agrees with the neighborhood construction. The experimental results demonstrate that the proposed algorithm effectively improves the performance of the constructed classification model.
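The selection strategy described in the abstract can be illustrated with a minimal sketch: at each iteration, pick unlabeled points *nearest* the decision boundary (rather than the most confident predictions), and accept a point's predicted label only when it agrees with a neighborhood vote. This sketch is illustrative only and simplifies the paper's method: the base learner is replaced by a toy two-centroid linear classifier, and the Apollonius-circle neighborhood construction is replaced by a k-nearest-neighbor majority vote; all function names and data are hypothetical.

```python
# Illustrative self-training loop: boundary-proximity selection plus
# neighborhood agreement. A two-centroid classifier stands in for the
# base learner, and a k-NN majority vote stands in for the paper's
# Apollonius-circle construction (a simplification, not the real method).
import math

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def self_train(labeled, unlabeled, k=3, batch=2, iters=5):
    labeled = dict(labeled)              # point -> label (0 or 1)
    unlabeled = list(unlabeled)
    for _ in range(iters):
        if not unlabeled:
            break
        cp = centroid([p for p, y in labeled.items() if y == 1])
        cn = centroid([p for p, y in labeled.items() if y == 0])
        # Signed margin: negative means closer to the positive centroid.
        margin = lambda p: math.dist(p, cp) - math.dist(p, cn)
        # Candidates nearest the boundary (smallest |margin|),
        # NOT the highest-confidence predictions.
        unlabeled.sort(key=lambda p: abs(margin(p)))
        added = []
        for p in unlabeled[:batch]:
            pred = 1 if margin(p) < 0 else 0
            # Neighborhood agreement: majority label of the k nearest
            # labeled points must match the classifier's prediction.
            neigh = sorted(labeled, key=lambda q: math.dist(p, q))[:k]
            vote = 1 if 2 * sum(labeled[q] for q in neigh) > k else 0
            if pred == vote:             # label only on agreement
                labeled[p] = pred
                added.append(p)
        unlabeled = [p for p in unlabeled if p not in added]
    return labeled
```

On a toy 2-D set such as `labeled = {(0, 0): 0, (1, 0): 0, (5, 5): 1, (6, 5): 1}` with `unlabeled = [(2, 2), (4, 4), (0.5, 0.2)]`, the loop first labels the two points straddling the boundary, then the remaining easy point, since its prediction and neighborhood vote agree at each step.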
Pages: 24