Semi-supervised classification based on subspace sparse representation

Cited by: 29
Authors
Yu, Guoxian [1 ,2 ]
Zhang, Guoji [3 ]
Zhang, Zili [4 ]
Yu, Zhiwen [2 ]
Deng, Lin [5 ]
Affiliations
[1] Southwest Univ, Coll Comp & Informat Sci, Chongqing 400715, Peoples R China
[2] S China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[3] S China Univ Technol, Sch Sci, Guangzhou 510640, Guangdong, Peoples R China
[4] Deakin Univ, Sch Informat Technol, Geelong, Vic 3220, Australia
[5] George Mason Univ, Dept Comp Sci, Fairfax, VA 22030 USA
Keywords
Semi-supervised classification; High-dimensional data; Graph construction; Subspace sparse representation; FACE RECOGNITION; ILLUMINATION; FRAMEWORK
DOI
10.1007/s10115-013-0702-2
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The graph plays a central role in graph-based semi-supervised classification. However, because high-dimensional data contain noisy and redundant features, constructing a well-structured graph on high-dimensional samples is not trivial. In this paper, we exploit sparse representation in random subspaces for graph construction and propose a method called Semi-Supervised Classification based on Subspace Sparse Representation (SSC-SSR for short). SSC-SSR first generates several random subspaces from the original feature space and then seeks sparse representation coefficients in these subspaces. Next, it trains semi-supervised linear classifiers on graphs constructed from these coefficients. Finally, it combines these classifiers into an ensemble classifier by solving a linear regression problem. Unlike traditional graph-based semi-supervised classification methods, the graphs of SSC-SSR are data-driven rather than predefined by hand. An empirical study on face image classification tasks demonstrates that SSC-SSR not only achieves superior recognition performance compared with competing methods, but also remains effective over wide ranges of input parameters.
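The pipeline summarized in the abstract (random subspaces, sparse coding, graph-based classifiers, ensemble combination) can be sketched in a few dozen lines. The sketch below is illustrative only and is not the authors' implementation: it assumes scikit-learn's Lasso as the sparse coder, substitutes a simple label-propagation step for the paper's semi-supervised linear classifier, and averages the per-subspace predictions uniformly instead of learning the combination weights by linear regression. Names such as n_subspaces and subspace_dim are hypothetical parameters introduced for this sketch.

```python
# Minimal, hypothetical sketch of an SSC-SSR-style pipeline (not the authors' code).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def sparse_graph(X, alpha=0.05):
    """Edge weights are sparse-representation coefficients: each sample is
    coded over all remaining samples in the (sub)space."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        coder = Lasso(alpha=alpha, max_iter=2000)
        coder.fit(X[others].T, X[i])          # columns of the dictionary = other samples
        W[i, others] = np.abs(coder.coef_)
    return (W + W.T) / 2                      # symmetrize the graph

def propagate(W, y, n_classes, mu=0.99, iters=50):
    """Simple graph label propagation, used here as a stand-in for the
    paper's semi-supervised linear classifier. Labels of -1 mean unlabeled."""
    S = np.diag(1.0 / np.maximum(W.sum(1), 1e-12)) @ W   # row-normalized graph
    Y = np.zeros((len(y), n_classes))
    for i, lbl in enumerate(y):
        if lbl >= 0:
            Y[i, lbl] = 1.0
    F = Y.copy()
    for _ in range(iters):
        F = mu * S @ F + (1 - mu) * Y
    return F

def ssc_ssr(X, y, n_classes, n_subspaces=5, subspace_dim=50):
    """Average classifiers trained on graphs built in random feature subspaces.
    (The paper instead learns the combination weights via linear regression.)"""
    d = X.shape[1]
    scores = []
    for _ in range(n_subspaces):
        feats = rng.choice(d, size=min(subspace_dim, d), replace=False)
        W = sparse_graph(X[:, feats])
        scores.append(propagate(W, y, n_classes))
    return np.mean(scores, axis=0).argmax(1)
```

In this sketch, unlabeled samples are marked with label -1 and ssc_ssr returns hard label predictions for all samples; the uniform averaging step is the one deliberate simplification of the ensemble combination described in the abstract.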
Pages: 81-101
Page count: 21