Sparse graph-based transduction for image classification

Times Cited: 3
Authors
Huang, Sheng [1 ,2 ]
Yang, Dan [1 ,2 ,3 ]
Zhou, Jia [1 ]
Huangfu, Lunwen [4 ]
Zhang, Xiaohong [2 ,3 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Minist Educ, Key Lab Dependable Serv Comp Cyber Phys Soc, Chongqing 400044, Peoples R China
[3] Chongqing Univ, Sch Software Engn, Chongqing 400044, Peoples R China
[4] Univ Arizona, Eller Coll Management, Tucson, AZ 85721 USA
Keywords
image classification; sparse representation; graph learning; transductive learning; semisupervised learning; FACE-RECOGNITION; REPRESENTATION;
DOI
10.1117/1.JEI.24.2.023007
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Motivated by the remarkable successes of graph-based transduction (GT) and sparse representation (SR), we present a classifier named the sparse graph-based classifier (SGC) for image classification. In SGC, SR is leveraged to measure the correlation (similarity) between every pair of samples, and a graph is constructed to encode these correlations. Laplacian eigenmapping is then adopted to derive the graph Laplacian of this graph. Finally, SGC is obtained by plugging the graph Laplacian into the conventional GT framework. In the image classification procedure, SGC utilizes the correlations encoded in the learned graph Laplacian to infer the labels of unlabeled images. SGC inherits the merits of both GT and SR: compared to SR, SGC improves the robustness and discriminating power of GT; compared to GT, SGC fully exploits the whole data set and thereby alleviates the undercomplete-dictionary issue suffered by SR. Four popular image databases are employed for evaluation. The results demonstrate that SGC achieves promising performance in comparison with state-of-the-art classifiers, particularly in the small-training-sample-size case and the noisy-sample case. (C) 2015 SPIE and IS&T
Pages: 9
References
39 items
[1] Agarwal S, 2002, Lecture Notes in Computer Science, V2353, P113
[2] Amaldi E, Kann V. On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theoretical Computer Science, 1998, 209(1-2): 237-260
[3] [Anonymous], 2010, Int. Conf. Mach. Learn.
[4] [Anonymous], 2009, Ariz. State Univ.
[5] [Anonymous], 2007, Advances in Neural Information Processing Systems
[6] Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 2003, 15(6): 1373-1396
[7] Bergamo A, 2011, NIPS, P2088
[8] Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3)
[9] Cheng B, Yang J, Yan S, Fu Y, Huang T S. Learning with l1-graph for image analysis. IEEE Transactions on Image Processing, 2010, 19(4): 858-866
[10] Donoho D L. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 2006, 59(6): 797-829