Nonlinear dimensionality reduction based on dictionary learning

Cited by: 0
Authors
Zheng S.-L. [1 ]
Li Y.-X. [1 ]
Wei X. [2 ]
Peng X.-S. [1 ]
Affiliations
[1] School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai
[2] Department of Electrical and Computer Engineering, Technische Universitaet München, Munich
Funding
National Natural Science Foundation of China;
Keywords
Compressed sensing (CS); Dictionary learning; Dimensionality reduction (DR); Sparse representation (SR);
DOI
10.16383/j.aas.2016.c150557
Abstract
Most classic dimensionality reduction (DR) algorithms, such as principal component analysis (PCA) and isometric mapping (ISOMAP), focus on finding a low-dimensional embedding of the original data, and the resulting mappings are often not reversible. Making the DR process reversible remains challenging in many applications. Sparse representation (SR) has shown its power in signal reconstruction and denoising. To tackle the processing of large-scale datasets, this work develops a differentiable model for invertible DR based on SR. In mapping a high-dimensional input signal to a low-dimensional feature, we aim to preserve important geometric properties (such as inner products, distances, and angles) so that reliable reconstruction from the low-dimensional space back to the original high-dimensional space is possible. We employ an algorithm called concentrated dictionary learning (CDL) to train a high-dimensional dictionary whose energy is concentrated in a low-dimensional subspace. We then design a pair of dictionaries, D and P, where D is used to obtain the sparse representation and P is a direct down-sampling of D; CDL ensures that P captures most of the energy of D. The signal reconstruction problem is thus transformed into training the dictionaries D and P, and the mapping from the input signal X to the feature Y becomes a matter of energy retention from D to P. Experimental results show that, free of the restricted isometry property (RIP) constraints imposed on linear projections, CDL can reconstruct images from a lower-dimensional space and outperforms state-of-the-art DR methods (such as Gaussian random compressive sensing). In addition, for noise-corrupted images, CDL achieves better compression performance than JPEG2000. Copyright © 2016 Acta Automatica Sinica. All rights reserved.
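The paired-dictionary idea in the abstract can be sketched in code. The following is an illustrative toy example, not the authors' CDL method: here D is a random unit-norm dictionary (rather than one trained to concentrate its energy), P is a direct row-wise down-sampling of D, and a standard orthogonal matching pursuit (OMP) routine stands in for the sparse coder. All dimensions, names, and parameters are assumptions for illustration only.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse coding of y under dictionary A."""
    residual = y.copy()
    support = []
    coef = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
n, m, K, k = 64, 16, 128, 4          # high dim, low dim, number of atoms, sparsity

D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
P = D[:m, :]                         # direct down-sampling of D (first m rows)

# Synthesize a k-sparse test signal lying in the span of D.
alpha_true = np.zeros(K)
idx = rng.choice(K, k, replace=False)
alpha_true[idx] = rng.standard_normal(k)
x = D @ alpha_true                   # high-dimensional input X

# Dimensionality reduction: sparse-code x under D, then project with P.
alpha = omp(D, x, k)
y = P @ alpha                        # low-dimensional feature Y

# Reconstruction: recover a sparse code from y under P, then synthesize with D.
alpha_rec = omp(P, y, k)
x_hat = D @ alpha_rec

print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

With a randomly down-sampled P as above, reconstruction quality is not guaranteed; the point of CDL in the paper is precisely to train D so that its energy concentrates in the rows kept by P, which is what makes the round trip reliable.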
Pages: 1065-1076 (11 pages)
References
25 entries
  • [1] Van Der Maaten L.J.P., Postma E.O., Van Den Herik H.J., Dimensionality reduction: a comparative review, Journal of Machine Learning Research, 10, pp. 66-71, (2009)
  • [2] Donoho D.L., Compressed sensing, IEEE Transactions on Information Theory, 52, 4, pp. 1289-1306, (2006)
  • [3] Candes E.J., Romberg J., Tao T., Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 2, pp. 489-509, (2006)
  • [4] Tropp J.A., Gilbert A.C., Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory, 53, 12, pp. 4655-4666, (2007)
  • [5] Mairal J., Bach F., Ponce J., Task-driven dictionary learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34, 4, pp. 791-804, (2012)
  • [6] Gao J.B., Shi Q.F., Caetano T.S., Dimensionality reduction via compressive sensing, Pattern Recognition Letters, 33, 9, pp. 1163-1170, (2012)
  • [7] Gkioulekas I.A., Zickler T., Dimensionality reduction using the sparse linear model, Proceedings of the 2011 Advances in Neural Information Processing Systems, pp. 271-279, (2011)
  • [8] Calderbank R., Jafarpour S., Schapire R., Compressed Learning: Universal Sparse Dimensionality Reduction and Learning in the Measurement Domain, Technical Report, (2009)
  • [9] Baraniuk R.G., Wakin M.B., Random projections of smooth manifolds, Foundations of Computational Mathematics, 9, 1, pp. 51-77, (2009)
  • [10] Hegde C., Wakin M.B., Baraniuk R.G., Random projections for manifold learning, Proceedings of the 2008 Advances in Neural Information Processing Systems, pp. 641-648, (2008)