Efficient Sparse Representation for Learning With High-Dimensional Data

Cited by: 13
Authors
Chen, Jie [1 ]
Yang, Shengxiang [2 ]
Wang, Zhu [3 ]
Mao, Hua [4 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu 610065, Peoples R China
[2] De Montfort Univ, Sch Comp Sci & Informat, Leicester LE1 9BH, Leics, England
[3] Sichuan Univ, Law Sch, Chengdu 610065, Peoples R China
[4] Northumbria Univ, Dept Comp & Informat Sci, Newcastle Upon Tyne NE1 8ST, Tyne & Wear, England
Funding
National Natural Science Foundation of China;
Keywords
Sparse matrices; Optimization; Iterative algorithms; Dictionaries; Convex functions; Approximation algorithms; Convergence; Linear representation; low-dimensional structures; probabilistic simplex; sparse representation; FACE RECOGNITION; SHRINKAGE; ALGORITHM;
DOI
10.1109/TNNLS.2021.3119278
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to their capability of effectively learning intrinsic structures from high-dimensional data, techniques based on sparse representation have begun to display an impressive impact on several fields, such as image processing, computer vision, and pattern recognition. Learning sparse representations is often computationally expensive due to the iterative computations needed to solve convex optimization problems in which the number of iterations is unknown before convergence. Moreover, most sparse representation algorithms focus only on determining the final sparse representation results and ignore the changes in the sparsity ratio (SR) during the iterative computations. In this article, two algorithms are proposed to learn sparse representations based on locality-constrained linear representation learning with probabilistic simplex constraints. Specifically, the first algorithm, called approximated local linear representation (ALLR), obtains a closed-form solution from individual locality-constrained sparse representations. The second algorithm, called ALLR with symmetric constraints (ALLR$_{SC}$), further obtains a symmetric sparse representation result with a limited number of computations; notably, the sparsity and convergence of the sparse representations are guaranteed by theoretical analysis. The steady decline in the SR during the iterative computations is a critical factor in practical applications. Experimental results on public datasets demonstrate that the proposed algorithms perform better than several state-of-the-art algorithms for learning with high-dimensional data.
Pages: 4208-4222
Page count: 15
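As a rough illustration of the setting the abstract describes, the sketch below codes each sample over its k nearest neighbours with coefficients constrained to the probabilistic simplex (nonnegative entries summing to one), solved by projected gradient descent. This is not the authors' ALLR/ALLR$_{SC}$ method, which has a closed-form solution and symmetric variant described in the paper; the function names, the solver, and the parameters (k, n_iters) are illustrative assumptions only.

```python
# Minimal sketch: locality-constrained coding with a probabilistic simplex
# constraint, solved by projected gradient descent. Illustrative only; not the
# ALLR/ALLR_SC algorithms from the paper.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {c : c >= 0, sum(c) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def local_simplex_code(X, k=5, n_iters=100):
    """Code each column of X (d x n) over its k nearest neighbours with
    simplex-constrained coefficients; returns an n x n coefficient matrix."""
    d, n = X.shape
    C = np.zeros((n, n))
    G = X.T @ X                              # Gram matrix for distances
    sq = np.diag(G)
    D2 = sq[:, None] + sq[None, :] - 2 * G   # pairwise squared distances
    for i in range(n):
        order = np.argsort(D2[i])
        nbrs = np.array([j for j in order if j != i])[:k]
        N = X[:, nbrs]                       # d x k local dictionary
        c = np.full(k, 1.0 / k)              # start at the simplex centre
        L = np.linalg.norm(N.T @ N, 2)       # Lipschitz constant of the gradient
        eta = 1.0 / max(L, 1e-12)
        for _ in range(n_iters):             # projected gradient steps
            grad = N.T @ (N @ c - X[:, i])
            c = project_simplex(c - eta * grad)
        C[nbrs, i] = c
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 50))
    C = local_simplex_code(X, k=5)
    print(C.shape, float(C.sum(axis=0).mean()))  # each column sums to ~1
```

The simplex constraint keeps every coefficient vector sparse and nonnegative by construction, which is the general flavour of the "probabilistic simplex" constraint mentioned in the abstract; the paper's contribution lies in obtaining such representations in closed form and with a symmetric, provably convergent variant.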