S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication

Cited by: 0
Authors
Iakovidou, Charikleia [1 ]
Wei, Ermin [1 ]
Affiliations
[1] Northwestern Univ, Dept Elect & Comp Engn, Evanston, IL 60208 USA
Keywords
Optimization; Convergence; Distributed algorithms; Radio frequency; Approximation algorithms; Quantization (signal); Probabilistic logic; Distributed optimization; network optimization; quantization; stochastic optimization; Multiagent optimization; Subgradient methods; Algorithms; Consensus
DOI
10.1109/TAC.2022.3151734
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first-order algorithms that allow for the tradeoff of computation and communication to best accommodate the application setting. We assume that the information exchanged between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method.
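The abstract describes an iteration that alternates inexact (e.g., quantized) information exchange between nodes with steps along stochastic gradient approximations, converging linearly in expectation to a neighborhood of the optimum for strongly convex, smooth objectives. The toy sketch below illustrates that general idea on a simple quadratic problem; it is an assumption-based illustration of a quantized distributed stochastic gradient step, not the paper's exact S-NEAR-DGD recursion, and all problem data and parameter values are invented for the example.

```python
import numpy as np

# Illustrative sketch (NOT the exact S-NEAR-DGD algorithm): n nodes jointly
# minimize sum_i f_i(x) with f_i(x) = 0.5 * (x - b_i)^2, whose optimum is
# mean(b). Each round: (1) exchange quantized local iterates through a
# doubly stochastic mixing matrix W (inexact communication), (2) step along
# a noisy local gradient (stochastic gradient approximation).

rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)               # local data; global optimum is b.mean()
x = np.zeros(n)                      # one scalar iterate per node
W = np.full((n, n), 1.0 / n)         # complete-graph averaging matrix
alpha, delta, sigma = 0.1, 0.05, 0.01  # step size, quantizer step, grad noise

def quantize(v, step):
    """Uniform quantizer modeling bandwidth-limited communication."""
    return step * np.round(v / step)

for _ in range(200):
    y = W @ quantize(x, delta)                   # inexact consensus step
    g = (y - b) + sigma * rng.normal(size=n)     # stochastic local gradients
    x = y - alpha * g                            # gradient step

# The iterates settle in a neighborhood of x* = b.mean(); its radius is
# governed by alpha, the quantization step delta, and the noise level sigma,
# mirroring the dependence the abstract describes.
print(np.max(np.abs(x - b.mean())))
```

Shrinking `delta` and `sigma` tightens the neighborhood, while the exact characterization in terms of algorithm and network parameters is the subject of the paper's analysis.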
Pages: 1281-1287
Page count: 7