A Comprehensively Tight Analysis of Gradient Descent for PCA

Cited by: 0
Authors
Xu, Zhiqiang [1 ]
Li, Ping
Affiliation
[1] Baidu Research, Cognitive Computing Lab, 10 Xibeiwang East Rd, Beijing 100193, China
Source
Advances in Neural Information Processing Systems 34 (NeurIPS 2021) | 2021 / Vol. 34
Keywords
Optimization
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We study the Riemannian gradient method for PCA, for which a crucial fact is that, despite the simplicity of the considered setting, i.e., the deterministic version of Krasulina's method, the convergence rate has not yet been well understood. In this work, we provide a general tight analysis for the gap-dependent rate of O((1/Δ) log(1/ε)) that holds for any real symmetric matrix. More importantly, when the eigengap Δ is significantly smaller than the target accuracy ε on the objective suboptimality of the final solution, a rate of this type is no longer tight, which calls for a worst-case rate. We further give the first worst-case analysis that achieves a rate of convergence of O((1/ε) log(1/ε)). The two analyses naturally roll out a comprehensively tight convergence rate of O((1/max{Δ, ε}) log(1/ε)). In particular, our gap-dependent analysis suggests a new promising learning rate for stochastic variance-reduced PCA algorithms. Experiments are conducted to confirm our findings as well.
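For a concrete picture of the method the abstract refers to, the sketch below implements Riemannian gradient ascent on the unit sphere for the leading eigenvector of a real symmetric matrix, i.e., the deterministic Krasulina-style iteration. This is an illustrative sketch, not the authors' code: the function name riemannian_gradient_pca, the default step size, and the stopping tolerance are assumptions made for the example.

```python
# Minimal sketch (assumption: not the authors' code) of Riemannian gradient
# ascent on the unit sphere for the top eigenvector of a real symmetric matrix A,
# i.e., a deterministic Krasulina-style iteration. Step size and tolerance
# defaults are illustrative choices, not values prescribed by the paper.
import numpy as np

def riemannian_gradient_pca(A, eta=None, tol=1e-8, max_iters=10_000, seed=0):
    """Maximize the Rayleigh quotient w^T A w over the unit sphere.

    Update: w <- normalize(w + eta * (A w - (w^T A w) w)), where
    (I - w w^T) A w is the Riemannian gradient and normalization is the retraction.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0])
    w /= np.linalg.norm(w)
    if eta is None:
        eta = 1.0 / (2.0 * np.linalg.norm(A, 2))  # conservative step size ~ 1/(2||A||)
    rayleigh = w @ (A @ w)
    for _ in range(max_iters):
        Aw = A @ w
        rayleigh = w @ Aw                  # current objective value w^T A w
        grad = Aw - rayleigh * w           # Riemannian gradient, tangent to the sphere
        if np.linalg.norm(grad) < tol:     # small Riemannian gradient => near an eigenvector
            break
        w = w + eta * grad                 # gradient step in the tangent direction
        w /= np.linalg.norm(w)             # retraction back onto the sphere
    return w, rayleigh

if __name__ == "__main__":
    # Small symmetric test matrix; the estimate should approach the largest eigenvalue.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((50, 50))
    A = (M + M.T) / 2.0
    w, val = riemannian_gradient_pca(A)
    print(f"estimated top eigenvalue: {val:.6f}, true: {np.linalg.eigvalsh(A)[-1]:.6f}")
```

The number of iterations needed by such an iteration scales with the reciprocal of the eigengap Δ (or, in the gap-free regime, of the accuracy ε), which is the quantity the paper's O((1/max{Δ, ε}) log(1/ε)) rate makes precise.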
Pages: 12