On the O(1/k) Convergence of Distributed Gradient Methods Under Random Quantization

Cited by: 0
Authors
Dutta, Amit [1 ]
Doan, Thinh T. [2 ]
Affiliations
[1] Virginia Tech, Elect & Comp Engn Dept, Blacksburg, VA 24061 USA
[2] Univ Texas Austin, Aerosp Engn & Engn Mech Dept, Austin, TX 78712 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2024, Vol. 8
Keywords
Distributed optimization; quantized communication; two-time-scale stochastic approximation;
DOI
10.1109/LCSYS.2024.3519013
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
We revisit the distributed two-time-scale stochastic gradient method for solving a strongly convex optimization problem over a network of agents in a bandwidth-limited regime. In this setting, the agents can exchange only quantized values of their local variables, encoded with a limited number of communication bits. Due to quantization errors, the best-known existing convergence results for this method achieve only a suboptimal rate O(1/√k), whereas the optimal rate without quantization is O(1/k), where k is the iteration index. The main contribution of this letter is to close this theoretical gap: we identify a sufficient condition and develop a new analysis and step-size selection that achieve the optimal convergence rate O(1/k) for distributed gradient methods with any number of quantization bits. We provide numerical simulations to illustrate the effectiveness of our theoretical results.
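To make the setting concrete, the following is a minimal, illustrative Python sketch of a distributed gradient method in which agents exchange only randomly quantized iterates; it is not the authors' algorithm or step-size rule. The ring network, the scalar quadratic local objectives, the unbiased dithered quantizer with spacing DELTA, and the two decaying step sizes alpha and beta are all assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem: N agents, each with a strongly convex scalar quadratic
# f_i(x) = 0.5 * a[i] * (x - b[i])**2; the network objective is their sum.
N = 5
a = rng.uniform(1.0, 2.0, size=N)
b = rng.uniform(-1.0, 1.0, size=N)
x_star = np.sum(a * b) / np.sum(a)        # minimizer of sum_i f_i

# Doubly stochastic mixing matrix for a ring graph (each agent averages
# with its two neighbors and itself).
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

DELTA = 0.05  # quantization resolution (deliberately coarse)

def quantize(x):
    """Unbiased random (dithered) quantizer onto the grid {m * DELTA}: E[Q(x)] = x."""
    low = np.floor(x / DELTA) * DELTA
    p = (x - low) / DELTA                  # probability of rounding up
    return low + DELTA * (rng.random(x.shape) < p)

x = rng.uniform(-2.0, 2.0, size=N)         # local iterates, one per agent
for k in range(20000):
    alpha = 1.0 / (k + 50)                 # "slow" gradient step size (illustrative choice)
    beta = 1.0 / (k + 50) ** (2.0 / 3.0)   # "fast" consensus step size (illustrative choice)
    q = quantize(x)                        # agents broadcast only quantized values
    x = x - beta * (q - W @ q) - alpha * a * (x - b)

print(f"max deviation from optimum after {k + 1} steps: {np.abs(x - x_star).max():.4f}")
```

Rerunning the sketch with a coarser DELTA or a constant beta shows how quantization error limits the achievable accuracy, which is the kind of effect the letter's analysis and step-size selection are designed to address.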
Pages: 2967-2972
Page count: 6
References
13 in total
[1] Borkar, V. S. Stochastic Approximation: A Dynamical Systems Viewpoint. 2008.
[2] Doan, T. T. arXiv:2011.01868, 2021.
[3] Doan, Thinh T.; Maguluri, Siva Theja; Romberg, Justin. Convergence Rates of Distributed Gradient Methods Under Random Quantization: A Stochastic Approximation Approach. IEEE Transactions on Automatic Control, 2021, 66(10): 4469-4484.
[4] Doan, Thinh T.; Maguluri, Siva Theja; Romberg, Justin. Fast Convergence Rates of Distributed Subgradient Methods With Adaptive Quantization. IEEE Transactions on Automatic Control, 2021, 66(5): 2191-2205.
[5] Kovalev, D. Proceedings of Machine Learning Research, vol. 130, 2021.
[6] Michelusi, Nicolo; Scutari, Gesualdo; Lee, Chang-Shen. Finite-Bit Quantization for Distributed Algorithms With Linear Convergence. IEEE Transactions on Information Theory, 2022, 68(11): 7254-7280.
[7] Nedic, Angelia; Olshevsky, Alex; Rabbat, Michael G. Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization. Proceedings of the IEEE, 2018, 106(5): 953-976.
[8] Nedic, Angelia; Olshevsky, Alex; Ozdaglar, Asuman; Tsitsiklis, John N. On Distributed Averaging Algorithms and Quantization Effects. IEEE Transactions on Automatic Control, 2009, 54(11): 2506-2517.
[9] Nedic, Angelia; Ozdaglar, Asuman. Distributed Subgradient Methods for Multi-Agent Optimization. IEEE Transactions on Automatic Control, 2009, 54(1): 48-61.
[10] Reisizadeh, Amirhossein; Mokhtari, Aryan; Hassani, Hamed; Pedarsani, Ramtin. An Exact Quantized Decentralized Gradient Descent Algorithm. IEEE Transactions on Signal Processing, 2019, 67(19): 4934-4947.