Semisupervised Learning Using Negative Labels

Cited: 16
Authors
Hou, Chenping [1 ]
Nie, Feiping [2 ]
Wang, Fei [3 ]
Zhang, Changshui [2 ]
Wu, Yi [1 ]
Affiliations
[1] Natl Univ Def Technol, Dept Math & Syst Sci, Changsha 410076, Hunan, Peoples R China
[2] Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, Dept Automat, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[3] IBM Thomas J Watson Res Ctr, Healthcare Transformat Grp, Hawthorne, NY 10532 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2011, Vol. 22, No. 3
Funding
US National Science Foundation;
Keywords
Label propagation; negative labels; pattern classification; semisupervised learning; DIMENSIONALITY REDUCTION;
DOI
10.1109/TNN.2010.2099237
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The problem of semisupervised learning has attracted considerable research interest in recent years. Most existing methods learn from a partially labeled dataset, i.e., they assume that the exact labels of some data points are already known. In this paper, we propose a novel type of supervision information to guide semisupervised learning: an indication that a point does not belong to a specific category. We call this kind of information a negative label (NL) and propose a novel approach, NL propagation (NLP), that efficiently exploits it to assist semisupervised learning. Specifically, NLP assumes that nearby points should have similar class indicators. Data labels are propagated under the guidance of the NL information and of the geometric structure revealed by both labeled and unlabeled points, using specified initialization and parameter matrices. We present the convergence analysis, out-of-sample extension, parameter determination, computational complexity, and relations to other approaches, and we interpret the proposed approach within the framework of regularization. Promising experimental results on image, digit, spoken-letter, and text classification tasks demonstrate the effectiveness of our method.
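To make the idea concrete, here is a minimal sketch of graph-based label propagation seeded with negative labels. It uses the classic normalized-graph update F ← αSF + (1−α)Y rather than the paper's exact NLP initialization and parameter matrices, which are not reproduced in this record; the NL encoding (spreading unit mass uniformly over the classes a point may still belong to) is an illustrative assumption.

```python
import numpy as np

def nl_propagation(X, neg_labels, n_classes, alpha=0.99, sigma=1.0, n_iter=200):
    """Illustrative label propagation seeded with negative labels (NLs).

    X          : (n, d) data matrix.
    neg_labels : dict {point index: class the point is known NOT to belong to}.
    Uses the standard update F <- alpha * S @ F + (1 - alpha) * Y; this is a
    sketch of the general technique, not the paper's exact NLP formulation.
    """
    n = X.shape[0]
    # Gaussian affinity matrix with zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    S = W * inv_sqrt[:, None] * inv_sqrt[None, :]
    # NL initialization (assumed encoding): a point known not to be in class c
    # gets its unit mass spread uniformly over the remaining classes.
    Y = np.zeros((n, n_classes))
    for i, c in neg_labels.items():
        Y[i, :] = 1.0 / (n_classes - 1)
        Y[i, c] = 0.0
    # Propagate until (approximate) convergence.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```

For example, with two well-separated clusters and only one negative label per cluster ("this point is not class 1", "that point is not class 0"), the propagation assigns each cluster to the single class its seed point may still belong to.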
Pages: 420-432
Page count: 13