Maximally Correlated Principal Component Analysis Based on Deep Parameterization Learning

Cited by: 13
Authors
Chen, Haoran [1 ]
Li, Jinghua [1 ]
Gao, Junbin [2 ]
Sun, Yanfeng [1 ]
Hu, Yongli [1 ]
Yin, Baocai [3 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligen, Beijing 100124, Peoples R China
[2] Univ Sydney, Business Sch, Discipline Business Analyt, Sydney, NSW 2006, Australia
[3] Dalian Univ Technol, Fac Elect Informat & Elect Engn, Dalian 116024, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Maximally correlated principal component analysis; deep parameterization learning; back propagation; classification; NONLINEAR DIMENSIONALITY REDUCTION; FEATURE-EXTRACTION; KERNEL PCA;
DOI
10.1145/3332183
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Dimensionality reduction is widely used to handle high-dimensional data. Principal component analysis (PCA), a well-known dimensionality reduction method that seeks a low-dimensional representation of the original data, has achieved great success, and many improved PCA algorithms have been proposed. However, most PCA-based algorithms consider only the linear correlation of data features. In this article, we propose a novel dimensionality reduction model called maximally correlated PCA based on deep parameterization learning (MCPCADP), which accounts for nonlinear correlation within a deep parameterization framework. The model captures nonlinear correlation by maximizing the Ky Fan norm of the covariance matrix of the nonlinearly mapped data features, and a new back-propagation algorithm is derived for model optimization. To assess the proposed method, we conduct experiments on both a synthetic database and several real-world databases. The experimental results demonstrate that the proposed algorithm is comparable to several widely used algorithms.
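The objective the abstract describes can be sketched in a few lines: map the data through a nonlinear function, form the covariance matrix of the mapped features, and evaluate its Ky Fan k-norm (the sum of its k largest singular values), which the model would maximize. The fixed `tanh` layer and random weight matrix `W` below are illustrative assumptions, not the paper's learned mapping, whose parameters are trained by back-propagation.

```python
import numpy as np

def ky_fan_norm(M, k):
    """Ky Fan k-norm: the sum of the k largest singular values of M."""
    return np.sort(np.linalg.svd(M, compute_uv=False))[::-1][:k].sum()

# Toy stand-in for the learned nonlinear mapping: a fixed random tanh layer.
# (In the paper the mapping is learned by back-propagation; W here is random
# and fixed, purely for illustration.)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # 100 samples, 5 features
W = rng.standard_normal((5, 5))
Z = np.tanh(X @ W)                  # nonlinearly mapped features
Zc = Z - Z.mean(axis=0)             # center the mapped features
C = Zc.T @ Zc / (len(Z) - 1)        # covariance matrix of mapped features
print(ky_fan_norm(C, k=2))          # the quantity the model would maximize
```

For a symmetric positive semi-definite covariance matrix the singular values coincide with the eigenvalues, so this objective rewards mappings whose top-k principal directions carry as much variance as possible.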
Pages: 17
References
36 entries
[1] Anonymous, 2010, Lecture Notes in Computer Science.
[2] Anonymous, International Conference on Machine Learning.
[3] Anonymous, 1993, Advances in Neural Information Processing Systems.
[4] Anonymous, 2012, Machine Learning Pro.
[5] Boumal, N., 2014, Journal of Machine Learning Research, 15: 1455.
[6] Chang, Xiaojun; Nie, Feiping; Yang, Yi; Zhang, Chengqi; Huang, Heng. Convex Sparse PCA for Unsupervised Feature Learning. ACM Transactions on Knowledge Discovery from Data, 2016, 11(1).
[7] Choi, Heeyoul; Choi, Seungjin. Robust Kernel Isomap. Pattern Recognition, 2007, 40(3): 853-862.
[8] Feizi, S., 2017, arXiv:1702.05471v2.
[9] Fountoulakis, Kimon; Kundu, Abhisek; Kontopoulou, Eugenia-Maria; Drineas, Petros. A Randomized Rounding Algorithm for Sparse PCA. ACM Transactions on Knowledge Discovery from Data, 2017, 11(3).
[10] Gao, Zekai J., 2016, ACM Transactions on Knowledge Discovery from Data, 11: 1.