Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

Times Cited: 0
Authors:
Tong, Tian [1 ]
Ma, Cong [2 ]
Chi, Yuejie [1 ]
Affiliations:
[1] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
[2] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
Keywords: low-rank matrix factorization; scaled gradient descent; ill-conditioned matrix recovery; matrix sensing; robust PCA; matrix completion; general losses
DOI: Not available
CLC Number: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract:
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning, and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors, and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite nonconvexity, recent literature has shown that these simple heuristics in fact achieve linear convergence when initialized properly for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive, especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices. The goal of this paper is to set forth a competitive algorithmic approach dubbed Scaled Gradient Descent (ScaledGD), which can be viewed as preconditioned or diagonally-scaled gradient descent, where the preconditioners are adaptive and iteration-varying with minimal computational overhead. With tailored variants for low-rank matrix sensing, robust principal component analysis, and matrix completion, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties over a wide range of low-rank matrix estimation tasks. At the core of our analysis is the introduction of a new distance function that takes into account the preconditioners when measuring the distance between the iterates and the ground truth. Finally, numerical examples are provided to demonstrate the effectiveness of ScaledGD in accelerating the convergence of ill-conditioned low-rank matrix estimation in a wide range of applications.
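To make the preconditioning concrete, here is a minimal NumPy sketch of the ScaledGD update for the matrix sensing variant, where the loss is f(L, R) = (1/(2m)) ||A(L R^T) - y||^2 with measurements A(X)_i = <A_i, X>. Each factor's gradient is right-multiplied by the inverse Gram matrix of the other factor, which is the adaptive, iteration-varying preconditioner described in the abstract. The function name, the step size eta = 0.5, the iteration count, and the 1/m normalization are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def scaled_gd_matrix_sensing(As, y, r, eta=0.5, iters=300):
    """Sketch of ScaledGD for low-rank matrix sensing.

    Minimizes f(L, R) = (1/(2m)) * ||A(L R^T) - y||^2 over n1 x r and
    n2 x r factors, where A(X)_i = <As[i], X>. Right-multiplying the
    gradients by (R^T R)^{-1} and (L^T L)^{-1} is what makes the rate
    independent of the condition number of the ground-truth matrix.
    """
    m, n1, n2 = As.shape
    # Spectral initialization: top-r SVD of (1/m) * sum_i y_i A_i.
    Y = np.einsum('i,ijk->jk', y, As) / m
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(iters):
        residual = np.einsum('ijk,jk->i', As, L @ R.T) - y  # A(L R^T) - y
        G = np.einsum('i,ijk->jk', residual, As) / m        # A*(residual) / m
        # Scaled gradient steps; solve(P, M.T).T computes M @ inv(P)
        # without forming an explicit inverse of the r x r Gram matrix.
        L_new = L - eta * np.linalg.solve(R.T @ R, (G @ R).T).T
        R_new = R - eta * np.linalg.solve(L.T @ L, (G.T @ L).T).T
        L, R = L_new, R_new
    return L @ R.T

# Hypothetical demo: recover a 50 x 40 rank-3 matrix from 1500 Gaussian measurements.
rng = np.random.default_rng(0)
n1, n2, r, m = 50, 40, 3, 1500
X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
As = rng.standard_normal((m, n1, n2))
y = np.einsum('ijk,jk->i', As, X_true)
X_hat = scaled_gd_matrix_sensing(As, y, r)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

Since the preconditioners are r x r Gram matrices, applying them via np.linalg.solve adds only O((n1 + n2) r^2 + r^3) work per iteration, negligible next to the gradient computation itself; this is the sense in which ScaledGD retains the low per-iteration cost of plain gradient descent.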
Pages: 63
Related Papers (showing 10 of 50):
  • [1] Accelerating ill-conditioned low-rank matrix estimation via scaled gradient descent
    Tong, Tian
    Ma, Cong
    Chi, Yuejie
    Journal of Machine Learning Research, 2021, 22: 1-63
  • [2] Fast Low-Rank Matrix Estimation for Ill-Conditioned Matrices
    Soltani, Mohammadreza
    Hegde, Chinmay
    2018 IEEE International Symposium on Information Theory (ISIT), 2018: 371-375
  • [3] Accelerating Ill-Conditioned Robust Low-Rank Tensor Regression
    Tong, Tian
    Ma, Cong
    Chi, Yuejie
    2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 9072-9076
  • [4] Scaled stochastic gradient descent for low-rank matrix completion
    Mishra, Bamdev
    Sepulchre, Rodolphe
    2016 IEEE 55th Conference on Decision and Control (CDC), 2016: 2820-2825
  • [5] Online Completion of Ill-conditioned Low-Rank Matrices
    Kennedy, Ryan
    Taylor, Camillo J.
    Balzano, Laura
    2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2014: 507-511
  • [6] Convergence of Gradient Descent for Low-Rank Matrix Approximation
    Pitaval, Renaud-Alexandre
    Dai, Wei
    Tirkkonen, Olav
    IEEE Transactions on Information Theory, 2015, 61(8): 4451-4457
  • [7] Low-Rank Gradient Descent
    Cosson, Romain
    Jadbabaie, Ali
    Makur, Anuran
    Reisizadeh, Amirhossein
    Shah, Devavrat
    IEEE Open Journal of Control Systems, 2023, 2: 380-395
  • [8] Accelerated Factored Gradient Descent for Low-Rank Matrix Factorization
    Zhou, Dongruo
    Cao, Yuan
    Gu, Quanquan
    International Conference on Artificial Intelligence and Statistics, 2020, 108: 4430-4439
  • [9] Adaptive Low-Rank Gradient Descent
    Jadbabaie, Ali
    Makur, Anuran
    Reisizadeh, Amirhossein
    2023 62nd IEEE Conference on Decision and Control (CDC), 2023: 3315-3320
  • [10] Fast Gradient Method for Low-Rank Matrix Estimation
    Li, Hongyi
    Peng, Zhen
    Pan, Chengwei
    Zhao, Di
    Journal of Scientific Computing, 2023, 96