On the O(1/k) Convergence of Distributed Gradient Methods Under Random Quantization

Cited by: 0
Authors
Dutta, Amit [1 ]
Doan, Thinh T. [2 ]
Affiliations
[1] Virginia Tech, Elect & Comp Engn Dept, Blacksburg, VA 24061 USA
[2] Univ Texas Austin, Aerosp Engn & Engn Mech Dept, Austin, TX 78712 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2024, Vol. 8
Keywords
Distributed optimization; quantized communication; two-time-scale stochastic approximation;
DOI
10.1109/LCSYS.2024.3519013
CLC Classification
TP [automation technology, computer technology];
Discipline Code
0812 ;
Abstract
We revisit the distributed two-time-scale stochastic gradient method for solving a strongly convex optimization problem over a network of agents in a bandwidth-limited regime. In this setting, the agents can exchange only quantized values of their local variables, using a limited number of communication bits. Due to quantization errors, the best-known convergence rate for this method has been the suboptimal O(1/√k), whereas the optimal rate without quantization is O(1/k), where k is the iteration counter. The main contribution of this letter is to close this theoretical gap: we establish a sufficient condition and develop a new analysis and step-size selection that achieve the optimal convergence rate O(1/k) for distributed gradient methods under any number of quantization bits. We provide numerical simulations to illustrate the effectiveness of our theoretical results.
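The scheme the abstract describes can be illustrated with a minimal numerical sketch: agents on a ring hold quadratic local objectives, broadcast unbiased randomly quantized values, and run a fast gradient step alongside a slower consensus step. This is only an assumed toy instantiation, not the authors' exact algorithm; the step-size constants, the quantizer resolution `delta`, and the quadratic objectives `f_i(x) = 0.5*(x - c_i)^2` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, delta=0.05):
    """Unbiased (dithered) random quantizer: E[quantize(x)] = x."""
    low = np.floor(x / delta) * delta
    p = (x - low) / delta               # probability of rounding up
    return low + delta * (rng.random(x.shape) < p)

# Ring network of n agents with local objectives f_i(x) = 0.5 * (x - c_i)^2;
# the minimizer of the global objective (1/n) * sum_i f_i is mean(c).
n = 10
c = rng.normal(size=n)
W = np.zeros((n, n))                    # doubly stochastic mixing matrix
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = c.copy()                            # each agent starts at its own local minimizer
z = np.zeros(n)                         # slowly updated copies of the quantized values
for k in range(20000):
    alpha = 2.0 / (k + 100)             # gradient step size, O(1/k)
    beta = 2.0 / (k + 100) ** 0.75      # consensus step size; decays slower than alpha
    z = z + beta * (quantize(x) - z)    # track neighbors' quantized broadcasts
    x = x - beta * (x - W @ z) - alpha * (x - c)   # consensus + local gradient step

err = np.abs(x - c.mean()).max()        # worst-case distance to the global minimizer
```

Despite the persistent quantization noise, the diminishing two-time-scale step sizes drive every agent toward the global minimizer `mean(c)`, which is the behavior the letter's O(1/k) analysis quantifies.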
Pages: 2967-2972 (6 pages)