ESTIMATION OF HIGH-DIMENSIONAL LOW-RANK MATRICES

Cited by: 209
Authors
Rohde, Angelika [1 ]
Tsybakov, Alexandre B. [2 ]
Affiliations
[1] Univ Hamburg, Fachbereich Math, D-20146 Hamburg, Germany
[2] CREST, Stat Lab, F-92240 Malakoff, France
Source
ANNALS OF STATISTICS | 2011, Vol. 39, No. 2
Keywords
High-dimensional low-rank matrices; empirical process; sparse recovery; Schatten norm; penalized least-squares estimator; quasi-convex Schatten class embeddings; TRACE-NORM; AGGREGATION; CONSISTENCY; SELECTION; ENTROPY;
DOI
10.1214/10-AOS860
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Classification Codes
020208; 070103; 0714
Abstract
Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m x T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting, where the number mT of unknown entries can be much larger than the sample size N. Motivated by several applications, we consider estimation of the matrix A under the assumption that it has small rank. This can be viewed as a dimension reduction or sparsity assumption. In order to shrink toward a low-rank representation, we investigate penalized least squares estimators with a Schatten-p quasi-norm penalty term, p <= 1. We study these estimators under two possible assumptions: a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator / Frobenius norm." The main results are stated as nonasymptotic upper bounds on the prediction risk and on the Schatten-q risk of the estimators, where q is an element of [p, 2]. The rates that we obtain for the prediction risk are of the form rm/N (for m = T), up to logarithmic factors, where r is the rank of A. The particular examples of multi-task learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product, we derive bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings S_p^M -> S_2^M, p < 1, which are of independent interest.
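To make the kind of estimator studied here concrete, the sketch below works out the convex endpoint p = 1 of the Schatten-p penalty (the nuclear/trace norm) for the matrix completion example, solved by proximal gradient descent with singular value soft-thresholding. This is an illustrative sketch, not the paper's general p <= 1 procedure: the function names (svt, complete_matrix), the penalty level lam, the step size and the toy data are hypothetical choices.

    import numpy as np

    def svt(B, tau):
        # Singular value soft-thresholding: proximal operator of tau * (nuclear norm).
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def complete_matrix(Y, mask, lam, n_iter=500, step=1.0):
        # Proximal gradient descent for
        #   0.5 * ||mask * (A - Y)||_F^2 + lam * ||A||_nuclear,
        # i.e. the p = 1 instance of Schatten-p penalized least squares,
        # specialized to matrix completion (mask marks the observed entries).
        A = np.zeros_like(Y)
        for _ in range(n_iter):
            grad = mask * (A - Y)                 # gradient of the quadratic data-fit term
            A = svt(A - step * grad, step * lam)  # shrink singular values toward zero
        return A

    # Toy usage (hypothetical numbers): a rank-2, 30 x 40 matrix with roughly half
    # of its entries observed under small noise.
    rng = np.random.default_rng(0)
    m, T, r = 30, 40, 2
    A_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, T))
    mask = (rng.random((m, T)) < 0.5).astype(float)
    Y = mask * (A_true + 0.1 * rng.standard_normal((m, T)))
    A_hat = complete_matrix(Y, mask, lam=1.0)
    print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))

For p < 1 the Schatten penalty is only a quasi-norm and the objective is nonconvex; the p = 1 case shown above is the standard convex instance and conveys the shrink-toward-low-rank mechanism described in the abstract.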
Pages: 887-930
Number of pages: 44
Related Papers
50 records in total
  • [1] ESTIMATION OF (NEAR) LOW-RANK MATRICES WITH NOISE AND HIGH-DIMENSIONAL SCALING
    Negahban, Sahand
    Wainwright, Martin J.
    ANNALS OF STATISTICS, 2011, 39(02): 1069-1097
  • [2] Reconstruction of a high-dimensional low-rank matrix
    Yata, Kazuyoshi
    Aoshima, Makoto
    ELECTRONIC JOURNAL OF STATISTICS, 2016, 10(01): 895-917
  • [3] High-dimensional VAR with low-rank transition
    Alquier, Pierre
    Bertin, Karine
    Doukhan, Paul
    Garnier, Remy
    STATISTICS AND COMPUTING, 2020, 30(04): 1139-1153
  • [5] High-dimensional covariance matrix estimation using a low-rank and diagonal decomposition
    Wu, Yilei
    Qin, Yingli
    Zhu, Mu
    CANADIAN JOURNAL OF STATISTICS-REVUE CANADIENNE DE STATISTIQUE, 2020, 48(02): 308-337
  • [6] Low-rank Riemannian eigensolver for high-dimensional Hamiltonians
    Rakhuba, Maxim
    Novikov, Alexander
    Oseledets, Ivan
    JOURNAL OF COMPUTATIONAL PHYSICS, 2019, 396: 718-737
  • [7] On the detection of low rank matrices in the high-dimensional regime
    Chevreuil, Antoine
    Loubaton, Philippe
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018: 1102-1106
  • [8] Low-Rank Bandit Methods for High-Dimensional Dynamic Pricing
    Mueller, Jonas
    Syrgkanis, Vasilis
    Taddy, Matt
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [9] Low-rank numerical approximations for high-dimensional Lindblad equations
    Le Bris, C.
    Rouchon, P.
    PHYSICAL REVIEW A, 2013, 87(02)
  • [10] Low-rank diffusion matrix estimation for high-dimensional time-changed Levy processes
    Belomestny, Denis
    Trabs, Mathias
    ANNALES DE L INSTITUT HENRI POINCARE-PROBABILITES ET STATISTIQUES, 2018, 54(03): 1583-1621