Joint low-rank representation and spectral regression for robust subspace learning

Times Cited: 5
Authors
Peng, Yong [1 ,2 ,3 ]
Zhang, Leijie [1 ]
Kong, Wanzeng [1 ,2 ]
Qin, Feiwei [1 ]
Zhang, Jianhai [1 ,2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Hangzhou 310018, Peoples R China
[2] Key Lab Brain Machine Collaborat Intelligence Zhe, Hangzhou 310018, Peoples R China
[3] Soochow Univ, Prov Key Lab Comp Informat Proc Technol, Suzhou 215123, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Low-rank representation; Subspace learning; Spectral regression; Joint learning; Robustness; NONLINEAR DIMENSIONALITY REDUCTION; PRESERVING PROJECTIONS; SPARSE; GRAPH; ALGORITHM;
DOI
10.1016/j.knosys.2020.105723
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104; 0812; 0835; 1405;
Abstract
Subspace learning comprises a family of algorithms that project high-dimensional data onto a low-dimensional subspace so as to retain its desirable properties while reducing the dimensionality. In graph-based subspace learning algorithms, the quality of the graph strongly affects the quality of the learned projection matrix. In particular, when the data are noisy or even grossly corrupted, there is no guarantee that the constructed graph accurately depicts the intrinsic structure of the data. Moreover, the widely used two-stage paradigm, in which the projection matrix is learned on a pre-computed graph, severs the connection between the two stages. In this paper, we propose a general framework that removes these disadvantages, inspired by low-rank matrix recovery and spectral regression-based subspace learning. Concretely, by jointly optimizing the objectives of low-rank representation and spectral regression, we simultaneously recover the data from noise, obtain the graph affinity matrix, and learn the projection matrix. The formulated joint low-rank and subspace learning (JLRSL) framework can be efficiently optimized by the augmented Lagrange multiplier method. We evaluate JLRSL through extensive experiments on representative benchmark data sets; the results show that low-rank learning greatly facilitates subspace learning and leads to robust feature extraction. A comparison with state-of-the-art methods is also reported. (C) 2020 Elsevier B.V. All rights reserved.
Pages: 14
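To make the ingredients of the framework concrete, the following is a minimal NumPy sketch of the two building blocks the abstract refers to: low-rank representation (LRR) solved with an inexact augmented Lagrange multiplier (ALM) scheme, followed by spectral regression that turns the learned affinity into a projection matrix. It deliberately follows the conventional sequential pipeline that the paper argues against, purely to illustrate the ingredients; the function names, parameter values, and stopping rule are illustrative assumptions, not the authors' implementation of the joint JLRSL objective.

```python
# Illustrative sketch only: sequential LRR (inexact ALM) + spectral regression.
# All names and hyper-parameters below are assumptions for demonstration.
import numpy as np


def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt


def col_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * L2,1 norm."""
    norms = np.maximum(np.linalg.norm(M, axis=0), 1e-12)
    return M * np.maximum(1.0 - tau / norms, 0.0)


def lrr_alm(X, lam=0.1, rho=1.1, mu=1e-2, mu_max=1e6, n_iter=200, tol=1e-6):
    """Inexact ALM for  min ||Z||_* + lam*||E||_{2,1}  s.t.  X = X Z + E."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                     # nuclear-norm prox
        Z = np.linalg.solve(np.eye(n) + XtX,               # ridge-type system
                            XtX - X.T @ E + J + (X.T @ Y1 - Y2) / mu)
        E = col_shrink(X - X @ Z + Y1 / mu, lam / mu)      # column-sparse noise
        R1, R2 = X - X @ Z - E, Z - J                      # residuals
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E


def spectral_regression(X, Z, n_dims=10, alpha=1.0):
    """Affinity from Z, spectral embedding, then ridge regression for P."""
    W = (np.abs(Z) + np.abs(Z).T) / 2.0                    # symmetric affinity
    n = W.shape[0]
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    L_sym = np.eye(n) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)                     # ascending eigenvalues
    Y = vecs[:, 1:n_dims + 1]                              # skip trivial eigenvector
    # Projection matrix: min ||X^T P - Y||_F^2 + alpha * ||P||_F^2
    d = X.shape[0]
    P = np.linalg.solve(X @ X.T + alpha * np.eye(d), X @ Y)
    return P


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 120))                     # d=50 features, n=120 samples
    Z, E = lrr_alm(X)
    P = spectral_regression(X, Z, n_dims=5)
    print("embedded data shape:", (P.T @ X).shape)         # (5, 120)
```

In the joint formulation described in the abstract, the representation Z, the error term E, and the projection matrix P would be updated within a single ALM loop so that the graph and the subspace inform each other, rather than being computed in two isolated stages as above.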