A fast randomized algorithm for overdetermined linear least-squares regression

Cited by: 136
Authors
Rokhlin, Vladimir [1 ]
Tygert, Mark [1 ]
Affiliations
[1] Yale Univ, Program Appl Math, New Haven, CT 06511 USA
DOI
10.1073/pnas.0804869105
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy, Earth Sciences]; Q [Biosciences]; N [Natural Sciences (General)];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
We introduce a randomized algorithm for overdetermined linear least-squares regression. Given an arbitrary full-rank m × n matrix A with m ≥ n, any m × 1 vector b, and any positive real number ε, the procedure computes an n × 1 vector x that minimizes the Euclidean norm ‖Ax − b‖ to relative precision ε. The algorithm typically requires O((log(n) + log(1/ε)) mn + n³) floating-point operations. This cost is less than the O(mn²) required by classical schemes based on QR decompositions or bidiagonalization. We present several numerical examples illustrating the performance of the algorithm.
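The flavor of such sketch-and-precondition least-squares solvers can be illustrated with the rough sketch below. This is a hypothetical simplification, not the authors' implementation: a dense Gaussian sketch stands in for the paper's fast randomized transform, and an exact preconditioned solve stands in for iterative refinement, so the O((log(n) + log(1/ε)) mn + n³) cost advantage is not reproduced; the function name `sketched_lstsq` and the `oversample` parameter are illustrative choices.

```python
import numpy as np

def sketched_lstsq(A, b, oversample=4, rng=None):
    """Sketch-and-precondition least squares (illustrative simplification:
    a dense Gaussian sketch replaces the paper's fast randomized transform,
    and an exact solve replaces its iterative refinement)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(m, oversample * n)            # a few times n sketched rows suffice
    S = rng.standard_normal((k, m)) / np.sqrt(k)
    _, R = np.linalg.qr(S @ A)            # R factor of the small k x n sketch
    # A @ inv(R) is well conditioned with high probability, so the
    # preconditioned problem min ||(A R^{-1}) y - b|| is easy to solve;
    # the original solution is recovered as x = R^{-1} y.
    AR = np.linalg.solve(R.T, A.T).T      # computes A @ inv(R)
    y, *_ = np.linalg.lstsq(AR, b, rcond=None)
    return np.linalg.solve(R, y)
```

Because substituting y = Rx leaves the minimization problem unchanged, the result agrees with an ordinary least-squares solve up to roundoff; the sketch only affects the conditioning of the intermediate system, not the answer.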
Pages: 13212-13217
Page count: 6
References
9 references
[1] Ailon N (2007) Tech Rep 1385, Yale Univ Dept Comput Sci.
[2] Ailon N (2007) SIAM J Comput, in press.
[3] Björck Å (1996) Numerical Methods for Least Squares Problems. DOI: 10.1137/1.9781611971484.
[4] Drineas P (2007) arXiv:0710.1435.
[5] Golub GH, Van Loan CF (1996) Matrix Computations.
[6] Liberty E, Woolfe F, Martinsson P-G, Rokhlin V, Tygert M (2007) Randomized algorithms for the low-rank approximation of matrices. Proc Natl Acad Sci USA 104(51):20167-20172.
[7] Sarlós T (2006) Proc Ann IEEE Symp Found Comput Sci, p 143.
[8] Sarlós T (2006) Improved approximation algorithms for large matrices via random projections.
[9] Woolfe F (2007) Appl Comput Harmon Anal, in press.