Demixed Sparse Principal Component Analysis Through Hybrid Structural Regularizers

Cited by: 1
Authors
Zhang, Yan [1 ,2 ]
Xu, Haoqing [1 ,2 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing 211189, Jiangsu, Peoples R China
[2] Southeast Univ, Sch Artificial Intelligence, Nanjing 211189, Jiangsu, Peoples R China
Keywords
Task analysis; Principal component analysis; Optimization; Neurons; Neural activity; Decoding; Matrix decomposition; Dimensionality reduction; sparse principal component analysis; demixed principal component analysis; parallel proximal algorithms; PARAMETRIC WORKING-MEMORY; SAMPLE-SIZE; POWER METHOD; MATRIX; COMPUTATION; SELECTION; PERFORM
DOI
10.1109/ACCESS.2021.3098614
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Recently, sparse representations of multivariate data have gained great popularity in real-world applications such as neural activity analysis. Many previous analyses of such data use sparse principal component analysis (SPCA) to obtain a sparse representation. However, ℓ0-norm-based SPCA suffers from non-differentiability and local optima due to its non-convex regularization. Moreover, extracting the dependencies between task parameters and feature responses is essential for further analysis, yet SPCA usually generates components without demixing these dependencies. To address these problems, we propose a novel approach, demixed sparse principal component analysis (dSPCA), that relaxes the non-convex constraints into convex regularizers, e.g., the ℓ1-norm and the nuclear norm, and demixes the dependence of feature responses on various task parameters by optimizing the loss function over marginalized data. The sparse and demixed components greatly improve the interpretability of multivariate data. We also develop a parallel proximal algorithm to accelerate the optimization over the hybrid regularizers in our method. We provide theoretical analyses of the error bound and convergence. We apply our method to simulated datasets to evaluate its time cost, its ability to explain the demixed information, and its ability to recover sparsity in the reconstructed data. Finally, we successfully separate neural activity according to task parameters such as stimulus or decision, and visualize the demixed components on a real-world dataset.
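The hybrid regularizers named in the abstract (ℓ1-norm for sparsity, nuclear norm for low rank) are attractive for proximal splitting because each admits a closed-form proximal operator. As a minimal sketch of those two operators (an illustration only, not the paper's actual dSPCA implementation):

```python
import numpy as np

def prox_l1(V, t):
    """Prox of t*||V||_1: elementwise soft-thresholding toward zero."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def prox_nuclear(V, t):
    """Prox of t*||V||_* (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Toy check: entries with magnitude <= t are zeroed, larger ones shrink by t.
V = np.array([[3.0, -0.5],
              [0.2, -2.0]])
print(prox_l1(V, 1.0))
```

A parallel proximal scheme, as in the paper's setting, evaluates such operators for the different regularizers independently and then averages or coordinates the results, which is what makes the per-regularizer closed forms above useful.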
Pages: 103075-103090 (16 pages)