Semi-Supervised SVM With Extended Hidden Features

Cited by: 32
Authors
Dong, Aimei [1 ,2 ]
Chung, Fu-Lai [3 ]
Deng, Zhaohong [1 ]
Wang, Shitong [1 ]
Affiliations
[1] Jiangnan Univ, Sch Digital Media, Wuxi 214122, Peoples R China
[2] Qilu Univ Technol, Sch Informat, Jinan 250353, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Hidden features; integrated squared error between probability distributions; maximum margin; semisupervised learning (SSL); support vector machine (SVM); FUZZY SYSTEM; FRAMEWORK;
DOI
10.1109/TCYB.2015.2493161
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Many traditional semi-supervised learning algorithms train not only on the labeled samples but also incorporate the unlabeled samples into the training set through an automated labeling process such as manifold preserving. If some samples are falsely labeled, the automated labeling process generally propagates this error to the classifier, often severely. To avoid such error propagation, the unlabeled samples should not be directly incorporated into the training set by an automated labeling strategy. In this paper, a new semi-supervised support vector machine with extended hidden features (SSVM-EHF) is presented to address this issue. Following the maximum margin principle and minimizing the integrated squared error between the probability distributions of the labeled and unlabeled samples, the dimensionality of both sample sets is extended through an orthonormal transformation, generating hidden features shared by the labeled and unlabeled samples. The final training step of SSVM-EHF is then performed only on the labeled samples, using their original and hidden features; the unlabeled samples are no longer explicitly used. Experimental results confirm the effectiveness of the proposed method.
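The pipeline described in the abstract can be loosely sketched as follows. This is a minimal illustration, not the paper's method: the projection criterion used here (top eigenvectors of the pooled covariance of labeled and unlabeled data) is a hypothetical stand-in for the paper's ISE-based orthonormal transformation, and the names `hidden_features` and `k` are illustrative. It does show the structural point: the unlabeled samples influence only the shared orthonormal projection, while the final SVM is fit on labeled samples alone, with original plus hidden features.

```python
import numpy as np
from sklearn.svm import SVC

def hidden_features(X_lab, X_unlab, k=2):
    # Stand-in for the paper's ISE-based criterion: take an orthonormal
    # basis (top-k eigenvectors) of the pooled covariance of the labeled
    # and unlabeled data, so both sets share the same projection directions.
    X_all = np.vstack([X_lab, X_unlab])
    X_all = X_all - X_all.mean(axis=0)
    cov = np.cov(X_all, rowvar=False)
    _, vecs = np.linalg.eigh(cov)      # eigenvalues ascending
    W = vecs[:, -k:]                   # orthonormal: W.T @ W = I_k
    return X_lab @ W, X_unlab @ W

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 5))       # small labeled set
y = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(80, 5))     # larger unlabeled set

H_lab, _ = hidden_features(X_lab, X_unlab, k=2)
# Final training uses only the labeled samples, with original + hidden features;
# the unlabeled samples are no longer explicitly used.
X_ext = np.hstack([X_lab, H_lab])
clf = SVC(kernel="linear").fit(X_ext, y)
print(X_ext.shape)  # (20, 7)
```

Because the unlabeled samples never enter the final fit, no automated labels are assigned to them, which is the error-propagation safeguard the abstract argues for.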
Pages: 2924-2937 (14 pages)