Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

Cited: 0
Authors
Tong, Tian [1 ]
Ma, Cong [2 ]
Chi, Yuejie [1 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
[2] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
Keywords
low-rank matrix factorization; scaled gradient descent; ill-conditioned matrix recovery; matrix sensing; robust PCA; matrix completion; general losses; NONCONVEX OPTIMIZATION; FACTORIZATION; INCOHERENCE; COMPLETION; GUARANTEES; RECOVERY;
DOI
Not available
CLC Number
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors, and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization. Despite nonconvexity, recent literature has shown that these simple heuristics in fact achieve linear convergence when initialized properly for a growing number of problems of interest. However, upon closer examination, existing approaches can still be computationally expensive, especially for ill-conditioned matrices: the convergence rate of gradient descent depends linearly on the condition number of the low-rank matrix, while the per-iteration cost of alternating minimization is often prohibitive for large matrices. The goal of this paper is to set forth a competitive algorithmic approach dubbed Scaled Gradient Descent (ScaledGD), which can be viewed as preconditioned or diagonally-scaled gradient descent, where the preconditioners are adaptive and iteration-varying with minimal computational overhead. With tailored variants for low-rank matrix sensing, robust principal component analysis and matrix completion, we theoretically show that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent. Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices. To the best of our knowledge, ScaledGD is the first algorithm that provably has such properties over a wide range of low-rank matrix estimation tasks. At the core of our analysis is the introduction of a new distance function that takes into account the preconditioners when measuring the distance between the iterates and the ground truth. Finally, numerical examples are provided to demonstrate the effectiveness of ScaledGD in accelerating the convergence rate of ill-conditioned low-rank matrix estimation in a wide range of applications.
Pages: 63
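
Since the abstract describes the ScaledGD update only in words, the following minimal sketch makes the preconditioning concrete for the simplest instance: the fully observed loss f(L, R) = (1/2)||L R^T - M||_F^2. The function name scaled_gd, the random initialization, the step size eta = 0.5 and the iteration count are illustrative assumptions for this demo, not the paper's prescription (the paper analyzes spectral initialization and treats sensing, robust PCA and completion losses).

import numpy as np

def scaled_gd(M, r, eta=0.5, num_iters=300, seed=0):
    """Sketch of ScaledGD on the toy loss f(L, R) = 0.5 * ||L R^T - M||_F^2."""
    rng = np.random.default_rng(seed)
    n1, n2 = M.shape
    # Small random initialization keeps the demo self-contained; the paper
    # instead analyzes a spectral initialization.
    scale = np.sqrt(np.linalg.norm(M, 2) / r)
    L = scale * rng.standard_normal((n1, r)) / np.sqrt(n1)
    R = scale * rng.standard_normal((n2, r)) / np.sqrt(n2)
    for _ in range(num_iters):
        residual = L @ R.T - M
        grad_L = residual @ R            # gradient with respect to L
        grad_R = residual.T @ L          # gradient with respect to R
        # Adaptive, iteration-varying preconditioners: each factor's gradient
        # is right-scaled by the inverse Gram matrix of the other factor, an
        # r x r inverse whose cost is negligible next to the gradient itself.
        L_new = L - eta * grad_L @ np.linalg.inv(R.T @ R)
        R_new = R - eta * grad_R @ np.linalg.inv(L.T @ L)
        L, R = L_new, R_new
    return L, R

# Demo on an ill-conditioned rank-3 matrix (condition number 1000): on this
# toy instance the iteration count to reach a given accuracy is essentially
# the same as for a well-conditioned matrix.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((60, 3)))
V, _ = np.linalg.qr(rng.standard_normal((50, 3)))
M = U @ np.diag([1000.0, 30.0, 1.0]) @ V.T
L, R = scaled_gd(M, r=3)
print(np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))  # small relative error

For this fully observed loss the update rearranges to L <- (1 - eta) L + eta * M R (R^T R)^{-1}, and symmetrically for R, i.e. a damped alternating least-squares step, which gives some intuition for why the contraction rate does not deteriorate with the condition number. In the sensing, robust PCA and completion settings studied in the paper, the preconditioners stay the same while the gradients change (with additional truncation or projection steps where the paper requires them).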