Data representation learning via dictionary learning and self-representation

Cited: 0
Authors
Zeng, Deyu [1 ,2 ]
Su, Jing [3 ]
Wu, Zongze [1 ,3 ,4 ]
Ding, Chris [5 ]
Ren, Zhigang [3 ]
Affiliations
[1] Shenzhen Univ, Coll Mechatron & Control Engn, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[3] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[4] Shenzhen Univ, Guangdong Lab Artificial Intelligence & Digital E, Shenzhen 518060, Peoples R China
[5] Chinese Univ Hong Kong Shenzhen, Sch Data Sci, Shenzhen 518172, Peoples R China
Keywords
Dictionary learning; Low-rank representation; Data representation; Self-representation; ROBUST FACE RECOGNITION; SPARSE REPRESENTATION; ALGORITHM;
DOI
10.1007/s10489-023-04902-z
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Dictionary learning is an effective feature learning method that has led to many remarkable results in data representation and classification tasks. However, dictionary learning is performed on the original data representation, and in some cases the representation capability and discriminability of the learned dictionaries fall short, i.e., the learned representation is only sparse but not low-rank. In this paper, we propose a novel, efficient data representation learning method that combines dictionary learning and self-representation, exploiting both the sparsity of dictionary learning and the low-rank property of low-rank representation (LRR) simultaneously. Both the sparse and low-rank properties of the data representation can thus be naturally captured by our method. To solve the proposed model efficiently, we also introduce a more generalized data representation model. To the best of our knowledge, its closed-form solution is derived analytically for the first time through our rigorous mathematical analysis. Experimental results show that our method not only can be used for data pre-processing but also realizes better dictionary learning: samples in the same class obtain similar representations by our method, and the discriminability of the learned dictionary is also enhanced.
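The abstract combines two regularizers: a sparsity-inducing term (as in dictionary learning) and a low-rank term (as in LRR). A minimal illustrative sketch of the standard proximal operators behind these two penalties is given below; this is generic background, not the authors' algorithm, and the function names `soft_threshold` and `svt` are our own labels.

```python
import numpy as np

def soft_threshold(M, tau):
    """Elementwise soft-thresholding: proximal operator of tau * ||.||_1.
    Shrinks small entries to exactly zero, promoting a sparse representation."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_* (nuclear norm).
    Shrinks small singular values to zero, promoting a low-rank representation."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
Z = rng.standard_normal((20, 20))   # stand-in for a representation matrix

Z_sparse = soft_threshold(Z, 1.0)   # many entries become exactly zero
Z_lowrank = svt(Z, 5.0)             # rank drops below full rank (20)
```

Methods that impose both properties simultaneously, as the paper proposes, typically alternate or split such updates within an optimization scheme (e.g., ADMM), applying each operator to an auxiliary copy of the representation.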
Pages: 26988-27000
Page count: 13
Related Papers
36 records in total
[1]   K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation [J].
Aharon, Michal ;
Elad, Michael ;
Bruckstein, Alfred .
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2006, 54 (11) :4311-4322
[2]   Robust Principal Component Analysis? [J].
Candes, Emmanuel J. ;
Li, Xiaodong ;
Ma, Yi ;
Wright, John .
JOURNAL OF THE ACM, 2011, 58 (03)
[3]   Mixture correntropy for robust learning [J].
Chen, Badong ;
Wang, Xin ;
Lu, Na ;
Wang, Shiyuan ;
Cao, Jiuwen ;
Qin, Jing .
PATTERN RECOGNITION, 2018, 79 :318-327
[4]   Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture [J].
Chen, C. L. Philip ;
Liu, Zhulin .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (01) :10-24
[5]  
Chen J, 2014, COMPUT SCI
[6]   Low-rank representation with adaptive dictionary learning for subspace clustering [J].
Chen, Jie ;
Mao, Hua ;
Wang, Zhu ;
Zhang, Xinpei .
KNOWLEDGE-BASED SYSTEMS, 2021, 223
[7]   Sparse and Low-Rank Representation With Key Connectivity for Hyperspectral Image Classification [J].
Ding, Yun ;
Chong, Yanwen ;
Pan, Shaoming .
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2020, 13 :5609-5622
[8]  
Donoho D., 2000, HIGH DIMENSIONAL DAT
[9]   Sparse Subspace Clustering: Algorithm, Theory, and Applications [J].
Elhamifar, Ehsan ;
Vidal, Rene .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2013, 35 (11) :2765-2781
[10]  
Elhamifar E, 2009, PROC CVPR IEEE, P2782