S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication

Cited by: 0
Authors
Iakovidou, Charikleia [1 ]
Wei, Ermin [1 ]
Affiliations
[1] Northwestern Univ, Dept Elect & Comp Engn, Evanston, IL 60208 USA
Keywords
Optimization; Convergence; Distributed algorithms; Radio frequency; Approximation algorithms; Quantization (signal); Probabilistic logic; Distributed optimization; network optimization; quantization; stochastic optimization; MULTIAGENT OPTIMIZATION; SUBGRADIENT METHODS; ALGORITHMS; CONVERGENCE; CONSENSUS;
DOI
10.1109/TAC.2022.3151734
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first-order algorithms that allow for the tradeoff of computation and communication to best accommodate the application setting. We assume that the information exchanged between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method.
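To make the abstract's description concrete, below is a minimal sketch of a NEAR-DGD-style iteration with a stochastic gradient oracle and quantized (inexactly communicated) consensus rounds. The function names, the stochastic-rounding quantizer, the Gaussian gradient noise, and all parameter values are illustrative assumptions, not the authors' exact S-NEAR-DGD formulation.

# Minimal sketch (assumptions, not the published S-NEAR-DGD update):
# each node takes a local stochastic gradient step, then performs t consensus
# rounds in which the exchanged iterates are randomly quantized.
import numpy as np

def quantize(v, step=1e-2):
    # Unbiased stochastic rounding to a grid of width `step`; a stand-in for
    # the random communication distortion described in the abstract.
    low = np.floor(v / step) * step
    prob_up = (v - low) / step
    return low + step * (np.random.rand(*v.shape) < prob_up)

def s_near_dgd_sketch(local_grads, x0, W, alpha=0.05, t_rounds=2, iters=200, noise_std=1e-2):
    # local_grads[i](x) returns node i's exact local gradient; Gaussian noise of
    # standard deviation `noise_std` mimics a stochastic gradient oracle (assumption).
    n, d = x0.shape
    x = x0.copy()
    for _ in range(iters):
        # local stochastic gradient step at every node
        y = np.vstack([x[i] - alpha * (local_grads[i](x[i]) + noise_std * np.random.randn(d))
                       for i in range(n)])
        # t_rounds of averaging over the doubly stochastic matrix W with quantized exchange
        for _ in range(t_rounds):
            y = W @ quantize(y)
        x = y
    return x

Raising t_rounds spends more communication per iteration in exchange for tighter consensus, which is the computation/communication tradeoff the abstract refers to; the quantization step size and gradient noise level control the size of the convergence neighborhood.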
Pages: 1281-1287
Number of pages: 7