Tensor principal component analysis via convex optimization

Authors
Bo Jiang
Shiqian Ma
Shuzhong Zhang
Affiliations
[1] Shanghai University of Finance and Economics,Research Center for Management Science and Data Analytics, School of Information Management and Engineering
[2] The Chinese University of Hong Kong,Department of Systems Engineering and Engineering Management
[3] University of Minnesota,Department of Industrial and Systems Engineering
Source
Mathematical Programming | 2015, Vol. 150
Keywords
Tensor; Principal component analysis; Low rank; Nuclear norm; Semidefinite programming relaxation; 15A69; 15A03; 62H25; 90C22; 15A18
DOI
Not available
Abstract
This paper is concerned with the computation of the principal components of a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by semidefinite programming. Interestingly, our experiments show that both methods yield a rank-one solution for almost all randomly generated instances, in which case the original tensor PCA problem is solved to optimality. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which significantly reduces the computational effort. Various extensions of the model are considered as well.
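The rank-one correspondence described in the abstract can be illustrated numerically: embedding a 4th-order super-symmetric tensor into a symmetric matrix amounts to a reshape, and a rank-one tensor produces a rank-one embedded matrix. The following is a minimal NumPy sketch, assuming a small illustrative dimension and variable names of our own choosing (it is not the authors' implementation).

```python
import numpy as np

# Hedged illustration of the paper's key embedding: a rank-one
# 4th-order super-symmetric tensor F with entries
#   F[i, j, k, l] = x[i] * x[j] * x[k] * x[l]
# unfolds into the n^2 x n^2 matrix (x ⊗ x)(x ⊗ x)^T, which is
# symmetric and rank-one. The dimension n = 4 is illustrative.

n = 4
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Build the rank-one super-symmetric tensor x ⊗ x ⊗ x ⊗ x.
F = np.einsum('i,j,k,l->ijkl', x, x, x, x)

# Embed the tensor into an n^2 x n^2 symmetric matrix via reshape.
M = F.reshape(n * n, n * n)

# The embedded matrix is symmetric and rank-one, matching the
# equivalence stated in the abstract.
print(np.allclose(M, M.T))          # prints True
print(np.linalg.matrix_rank(M))     # prints 1
```

The converse direction (a rank-one embedded matrix forcing the tensor to be rank-one) is the nontrivial part proved in the paper; the sketch above only checks the easy direction for a generated instance.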
Pages: 423–457
Page count: 34