Semisupervised Kernel Matrix Learning by Kernel Propagation

Cited by: 30
Authors
Hu, Enliang [1]
Chen, Songcan [2]
Zhang, Daoqiang [2]
Yin, Xuesong [3]
Affiliations
[1] Yunnan Normal University, Department of Mathematics, Kunming 650092, China
[2] Nanjing University of Aeronautics and Astronautics, Department of Computer Science and Engineering, Nanjing 210016, China
[3] Zhejiang Radio & TV University, School of Information and Engineering, Hangzhou 310030, Zhejiang, China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2010 / Vol. 21 / No. 11
Funding
National Natural Science Foundation of China
Keywords
Kernel propagation; out-of-sample extension; pairwise constraint; seed-kernel matrix learning; semidefinite programming; RANKING;
DOI
10.1109/TNN.2010.2076301
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix over all the given samples when only a small amount of supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves room for improvement in both effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm formulated SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the overall performance of SS-KML. The main idea of KP is to first learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X_l from the full sample set X; 2) learn a seed-kernel matrix on X_l by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea behind KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one batch-style and the other online-style. Experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising as well.
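The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the actual seed-kernel step solves an SDP, which is replaced here by a cheap surrogate (overwriting constrained entries of a base RBF kernel and projecting back to the positive-semidefinite cone), and the propagation operator is approximated by a Nyström-style spread through row-normalized similarities to the labeled set. All function names and the choice of RBF base kernel are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances mapped to RBF similarities.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def learn_seed_kernel(X_l, must_link, cannot_link, gamma=1.0):
    """Stage 2 surrogate (NOT the paper's SDP): overwrite constrained
    entries of a base kernel on the labeled subset X_l, then clip
    negative eigenvalues to restore positive semidefiniteness."""
    K = rbf_kernel(X_l, X_l, gamma)
    for i, j in must_link:            # same class -> maximal similarity
        K[i, j] = K[j, i] = 1.0
    for i, j in cannot_link:          # different class -> zero similarity
        K[i, j] = K[j, i] = 0.0
    w, V = np.linalg.eigh((K + K.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T   # projection onto the PSD cone

def propagate_kernel(X, labeled_idx, K_seed, gamma=1.0):
    """Stage 3 surrogate: spread the seed kernel to all n samples
    through their normalized similarities to the l labeled samples."""
    C = rbf_kernel(X, X[labeled_idx], gamma)   # n x l cross-similarities
    T = C / C.sum(axis=1, keepdims=True)       # row-normalized weights
    return T @ K_seed @ T.T                    # n x n full-kernel matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))
labeled = np.arange(4)                         # stage 1: X_l = first 4 samples
K_seed = learn_seed_kernel(X[labeled], must_link=[(0, 1)], cannot_link=[(0, 2)])
K_full = propagate_kernel(X, labeled, K_seed)
print(K_full.shape)  # (12, 12)
```

Because the full kernel is of the form T K_seed T^T with a PSD seed, it is itself symmetric and PSD by construction, mirroring the property the actual SDP-based seed-kernel learning guarantees.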
Pages: 1831-1841
Number of pages: 11