Semi-Supervised Classification based on Local Sparse Representation

Cited by: 0
Authors
Yao, Guangjun [1]
Wang, Jun [1]
Affiliations
[1] College of Computer and Information Science, Southwest University, Chongqing
Source
Journal of Computational Information Systems | 2015, Vol. 11, No. 20
Funding
National Natural Science Foundation of China
Keywords
Graph structure; k nearest neighbors; Semi-Supervised; Sparse representation;
DOI
10.12733/jcis13842
Abstract
The performance of graph-based semi-supervised classification depends on the structure of the graph. However, constructing a graph that correctly reflects the sample distribution is a non-trivial task because of noisy samples and features, especially for high-dimensional data. Sparse representation has been used to construct graphs for graph-based semi-supervised learning and is robust to noisy samples and features, but it often demands substantial computational resources. In this paper, to address these problems of applying sparse representation to graph-based semi-supervised classification, we propose a method called Semi-Supervised Classification based on Local Sparse Representation (SSC-LSR). SSC-LSR first uses the k nearest neighbors of each sample, rather than all samples, to compute its reconstruction weights. These weights are then used to define a graph. Finally, the graph is incorporated into the widely used Gaussian Random Fields and Harmonic Functions framework to classify unlabeled samples. Experimental results on face datasets demonstrate that the proposed method not only achieves higher classification accuracy than traditional methods, but is also robust to the input parameters. Copyright © 2015 Binary Information Press.
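To make the described pipeline concrete, the following is a minimal Python sketch of the idea outlined in the abstract, not the authors' implementation: each sample's reconstruction weights are estimated over its k nearest neighbors with an l1-regularized fit, the weights define an affinity graph, and unlabeled samples are classified with the Gaussian Random Fields and Harmonic Functions solution of Zhu et al. (2003). The function and parameter names (lsr_graph, harmonic_labels, k, alpha) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import NearestNeighbors


def lsr_graph(X, k=10, alpha=0.01):
    """Affinity graph from local sparse reconstruction weights (sketch, assumed parameters)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    # k+1 neighbors because the nearest neighbor of each point is the point itself
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    for i in range(n):
        neigh = idx[i, 1:]
        # sparse, non-negative reconstruction of x_i from its k nearest neighbors
        lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
        lasso.fit(X[neigh].T, X[i])
        W[i, neigh] = lasso.coef_
    return np.maximum(W, W.T)  # symmetrize the graph


def harmonic_labels(W, y, labeled_mask):
    """Gaussian Random Fields / Harmonic Functions label propagation (Zhu et al., 2003)."""
    L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
    u, l = ~labeled_mask, labeled_mask
    Y_l = np.eye(int(y[l].max()) + 1)[y[l]]  # one-hot labels of the labeled samples
    # harmonic solution: L_uu f_u = -L_ul Y_l
    F_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ Y_l)
    return F_u.argmax(axis=1)                # predicted classes for the unlabeled samples
```

Assuming a face data matrix X of shape (n_samples, n_features), integer labels y, and a boolean labeled_mask, predictions for the unlabeled samples would be obtained as harmonic_labels(lsr_graph(X), y, labeled_mask).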
Pages: 7255-7262
Number of pages: 7