Online Low-Rank Representation Learning for Joint Multi-Subspace Recovery and Clustering

Cited: 23
Authors
Li, Bo [1 ,2 ]
Liu, Risheng [3 ,4 ,5 ]
Cao, Junjie [6 ]
Zhang, Jie [7 ]
Lai, Yu-Kun [8 ]
Liu, Xiuping [6 ]
Affiliations
[1] Nanchang Hangkong Univ, Sch Math & Informat Sci, Nanchang 330063, Jiangxi, Peoples R China
[2] Guilin Univ Elect Technol, Sch Comp Sci & Informat Secur, Guilin 541004, Peoples R China
[3] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian 116620, Peoples R China
[4] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[5] Dalian Univ Technol, Key Lab Ubiquitous Network & Serv Software Liaoni, Dalian 116620, Peoples R China
[6] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[7] Liaoning Normal Univ, Sch Math, Dalian 116029, Peoples R China
[8] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF24 3AA, S Glam, Wales
Keywords
Low-rank representation; subspace learning; large-scale data; dynamic data; online learning; robust visual tracking; dimensionality reduction
DOI
10.1109/TIP.2017.2760510
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, this global mechanism also means that LRR is ill-suited to large-scale or dynamic data. For large-scale data, LRR suffers from high time complexity; for dynamic data, it must recompute a complex rank minimization over the entire data set whenever new samples arrive, which is prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set, treating new input as out-of-sample data. The former often requires multiple runs to perform well and thus takes longer to run; the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online LRR subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm consists of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally using the learned subspace structure, and the LRR matrix is also solved incrementally by an efficient online singular value decomposition (SVD) algorithm. This reduces the time complexity dramatically for large-scale data and avoids repeated computation for dynamic problems. We further provide a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical subspace recovery and subspace clustering tasks show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
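To make the dynamic-updating stage concrete, the following is a minimal Python/NumPy sketch of one rank-truncated incremental SVD step of the kind the abstract alludes to when new samples arrive. The function name incremental_svd_step and the Brand-style update it implements are illustrative assumptions for exposition, not the paper's exact algorithm.

    import numpy as np

    def incremental_svd_step(U, s, Vt, x):
        # One Brand-style incremental SVD update: given the thin SVD
        # A = U @ np.diag(s) @ Vt of the data seen so far, fold in a
        # new column x and return a rank-k thin SVD of [A, x].
        k = s.size
        p = U.T @ x                # coordinates of x in the current basis
        residual = x - U @ p       # component of x outside the basis
        r = np.linalg.norm(residual)
        # Small (k+1) x (k+1) core matrix whose SVD re-diagonalizes the update.
        K = np.zeros((k + 1, k + 1))
        K[:k, :k] = np.diag(s)
        K[:k, -1] = p
        K[-1, -1] = r
        Uk, sk, Vtk = np.linalg.svd(K)
        # Enlarge the bases by the residual direction (or a zero column
        # if x already lies in the span), then rotate by the small factors.
        j = residual / r if r > 1e-12 else np.zeros_like(x)
        U_new = np.hstack([U, j[:, None]]) @ Uk
        V_aug = np.block([[Vt.T, np.zeros((Vt.shape[1], 1))],
                          [np.zeros((1, k)), np.ones((1, 1))]])
        Vt_new = (V_aug @ Vtk.T).T
        # Truncate back to rank k so every update has the same cost.
        return U_new[:, :k], sk[:k], Vt_new[:k, :]

    # Usage: seed with a small batch (the static-learning stage), then stream.
    # A0 = np.random.randn(100, 20)
    # U, s, Vt = np.linalg.svd(A0, full_matrices=False)
    # U, s, Vt = U[:, :5], s[:5], Vt[:5, :]
    # U, s, Vt = incremental_svd_step(U, s, Vt, np.random.randn(100))

Each such step costs roughly O(k^2 (m + n)) for an m x n data matrix of rank k, which is the source of the claimed speedup over recomputing a global rank minimization for the whole data set on every arrival.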
Pages: 335-348
Page count: 14