Conjugate Gradient and Variance Reduction Based Online ADMM for Low-Rank Distributed Networks

Times Cited: 0
Authors
Chen, Yitong [1 ]
Jin, Danqi [2 ]
Chen, Jie [1 ]
Richard, Cedric [3 ]
Zhang, Wen [1 ]
Affiliations
[1] Northwestern Polytech Univ, Ctr Intelligent Acoust & Immers Commun, Sch Marine Sci & Technol, Xian 710071, Peoples R China
[2] Wuhan Univ, Sch Elect Informat, Wuhan 430072, Peoples R China
[3] Univ Cote Dazur, CNRS, OCA, F-06000 Nice, France
Funding
National Natural Science Foundation of China;
Keywords
ADMM; conjugate gradient descent; distributed optimization; low-rank; variance reduction; diffusion adaptation; strategies; approximation; combination; algorithm; sparse;
DOI
10.1109/LSP.2025.3531200
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Modeling the relationships that may connect the optimal parameter vectors across nodes is essential for the performance of parameter estimation methods in distributed networks. In this paper, we consider a low-rank relationship and introduce matrix factorization to promote this low-rank property. To devise a distributed algorithm that requires no prior knowledge of the low-rank subspace, we first formulate local optimization problems at each node, which are then addressed using the Alternating Direction Method of Multipliers (ADMM). Three subproblems naturally arise from ADMM, each solved in an online manner at low computational cost. Specifically, the first is solved using stochastic gradient descent (SGD), while the other two are handled with the conjugate gradient descent method to avoid matrix inversions. To further enhance performance, a variance-reduction scheme is incorporated into the SGD updates. Simulation results validate the effectiveness of the proposed algorithm.
Pages: 706-710 (5 pages)