Progressive graph-based subspace transductive learning for semi-supervised classification

Cited by: 3
Authors
Chen, Long [1 ,2 ]
Zhong, Zhi [1 ,2 ]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[2] Nanning Normal Univ, Sch Comp & Informat Engn, Nanning 530000, Peoples R China
Keywords
pattern classification; learning (artificial intelligence); graph theory; matrix algebra; iterative methods; optimisation; label domain; feature domain; noise points; feature information; label information; PGSTL; feature-to-label alignment; representative relation matrix; feature relationships; feature affinity matrix; high-dimensional feature space; semisupervised classification; efficient semisupervised learning technique; sufficient labelled samples; fixed subject-wise graph; progressive graph-based subspace transductive learning; iterative optimisation strategy; SCENE CLASSIFICATION; LABEL PROPAGATION; FEATURE-SELECTION; CONVEX;
DOI
10.1049/iet-ipr.2018.6363
CLC (Chinese Library Classification) code
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph-based transductive learning (GTL) is an efficient semi-supervised learning technique that is commonly employed when sufficient labelled samples cannot be obtained. Conventional GTL methods generally construct an inaccurate graph in the feature domain and are unable to align feature information with label information. To address these issues, this paper proposes progressive graph-based subspace transductive learning (PGSTL). PGSTL gradually discovers the intrinsic relationships between samples, which aligns features with labels more accurately. Meanwhile, PGSTL builds a feature affinity matrix in a subspace of the original high-dimensional feature space, which effectively reduces the interference of noisy points. The representative relation matrix and the feature affinity matrix are then optimised with an iterative optimisation strategy and finally aligned. In this way, PGSTL not only reduces the interference of noisy points but also comprehensively exploits the information in both the feature and label domains of the data. Extensive experimental results on various benchmark datasets demonstrate that PGSTL achieves the best performance compared with several state-of-the-art semi-supervised learning methods.
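The abstract describes PGSTL only at a high level. For orientation, the sketch below shows a generic graph-based transductive learning baseline (label propagation over an affinity graph built in a PCA subspace). The PCA projection, median-heuristic bandwidth, and Zhou-style propagation update are standard illustrative choices assumed here; they are not the PGSTL formulation, which jointly learns the subspace and iteratively aligns the representative relation matrix with the feature affinity matrix.

```python
# Illustrative sketch only: a generic graph-based transductive (label propagation)
# baseline with the affinity graph built in a fixed PCA subspace. This is NOT the
# PGSTL algorithm from the paper.
import numpy as np

def rbf_affinity(Z, sigma=None):
    """Dense RBF affinity matrix from low-dimensional features Z."""
    sq = np.sum(Z ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T, 0.0)
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]))      # median heuristic for the bandwidth
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                        # no self-loops
    return W

def label_propagation(X, y, labeled_mask, n_components=10, alpha=0.99, n_iter=100):
    """Propagate labels over a graph built in a PCA subspace of X.
    y holds class indices for labelled samples; entries for unlabelled samples are ignored."""
    # Fixed PCA projection as a simple stand-in for a learned subspace.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T

    W = rbf_affinity(Z)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)         # symmetric normalisation D^-1/2 W D^-1/2

    n_classes = int(y[labeled_mask].max()) + 1
    Y = np.zeros((X.shape[0], n_classes))
    Y[labeled_mask, y[labeled_mask]] = 1.0          # one-hot targets for labelled samples

    F = Y.copy()
    for _ in range(n_iter):                         # F <- alpha * S F + (1 - alpha) * Y
        F = alpha * S @ F + (1.0 - alpha) * Y
    return F.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
    y = np.array([0] * 50 + [1] * 50)
    labeled = np.zeros(100, dtype=bool)
    labeled[[0, 1, 50, 51]] = True                  # only four labelled samples
    pred = label_propagation(X, y, labeled)
    print("accuracy:", (pred == y).mean())
```

In this baseline the graph is fixed once the subspace is chosen; the paper's contribution is precisely to avoid such a fixed graph by progressively refining the affinity and relation matrices during optimisation.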
Pages: 2753-2762
Page count: 10