Uniform Sampling for Matrix Approximation

Cited: 93
Authors
Cohen, Michael B. [1 ]
Lee, Yin Tat
Musco, Cameron
Musco, Christopher
Peng, Richard
Sidford, Aaron
Affiliations
[1] MIT, Dept EECS, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Source
PROCEEDINGS OF THE 6TH INNOVATIONS IN THEORETICAL COMPUTER SCIENCE (ITCS'15) | 2015
Funding
National Science Foundation (USA);
Keywords
Regression; leverage scores; matrix sampling; randomized numerical linear algebra; MONTE-CARLO ALGORITHMS; FAST RANDOMIZED ALGORITHM;
DOI
10.1145/2688073.2688113
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances. We take a fresh look at uniform sampling by examining what information it does preserve. Specifically, we show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, it is enough to compute a better approximation. This observation leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps. In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
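The leverage-score sampling that the abstract contrasts with uniform sampling can be illustrated concretely. The sketch below (an illustration of standard leverage-score row sampling, not the paper's iterative algorithm) computes exact leverage scores via a thin QR factorization, then samples and reweights rows so the subsample spectrally approximates the original matrix; a single high-leverage row that uniform sampling would likely miss is deliberately planted. All function names and parameters here are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def leverage_scores(A):
    """Exact leverage scores: squared row norms of Q from A = QR."""
    Q, _ = np.linalg.qr(A)           # thin QR, Q is n x d
    return np.sum(Q**2, axis=1)      # l_i in [0, 1], summing to rank(A)

def leverage_sample(A, k):
    """Sample k rows with probability ~ leverage score, reweighted so
    that E[S^T S] = A^T A for the sampled-and-rescaled matrix S."""
    n = A.shape[0]
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(n, size=k, replace=True, p=p)
    return A[idx] / np.sqrt(k * p[idx, None])

n, d, k = 2000, 10, 200
A = rng.standard_normal((n, d))
A[0] *= 100.0                        # one high-leverage row; uniform
                                     # sampling of 200/2000 rows would
                                     # usually drop it entirely
S = leverage_sample(A, k)

# relative spectral error of the approximation S^T S ≈ A^T A
err = (np.linalg.norm(S.T @ S - A.T @ A, 2)
       / np.linalg.norm(A.T @ A, 2))
```

Computing exact leverage scores this way costs a full QR factorization, which is precisely the expense the paper's iterative uniform-sampling scheme is designed to avoid.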
Pages: 181-190
Page count: 10
Related Papers
29 items in total
  • [11] Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition
    Drineas, Petros
    Kannan, Ravi
    Mahoney, Michael W.
    [J]. SIAM JOURNAL ON COMPUTING, 2006, 36 (01) : 184 - 206
  • [12] Sampling Algorithms for l2 Regression and Applications
    Drineas, Petros
    Mahoney, Michael W.
    Muthukrishnan, S.
    [J]. PROCEEDINGS OF THE SEVENTEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 2006, : 1127 - +
  • [13] Drineas P, 2006, LECT NOTES COMPUT SC, V4110, P316
  • [14] Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
    Halko, N.
    Martinsson, P. G.
    Tropp, J. A.
    [J]. SIAM REVIEW, 2011, 53 (02) : 217 - 288
  • [15] Johnson W. B., 1984, Contemporary mathematics, V26, P189
  • [16] Approaching Optimality For Solving SDD Linear Systems
    Koutis, Ioannis
    Miller, Gary L.
    Peng, Richard
    [J]. 2010 IEEE 51ST ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE, 2010, : 235 - 244
  • [17] Kumar S, 2012, J MACH LEARN RES, V13, P981
  • [18] Iterative Row Sampling
    Li, Mu
    Miller, Gary L.
    Peng, Richard
    [J]. 2013 IEEE 54TH ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS), 2013, : 127 - 136
  • [19] Randomized Algorithms for Matrices and Data
    Mahoney, Michael W.
    [J]. FOUNDATIONS AND TRENDS IN MACHINE LEARNING, 2011, 3 (02): : 123 - 224
  • [20] Meng X., 2014, SIAM J SCI COMPUTING, V36