On the Linear Convergence of Two Decentralized Algorithms

Cited: 1
Authors
Li, Yao [1 ,2 ]
Yan, Ming [1 ,2 ]
Affiliations
[1] Michigan State Univ, Dept Math, E Lansing, MI 48824 USA
[2] Michigan State Univ, Dept Computat Math Sci & Engn, E Lansing, MI 48824 USA
Keywords
Decentralized optimization; EXTRA; NIDS; Mixing matrix; Linear convergence;
DOI
10.1007/s10957-021-01833-y
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
Decentralized algorithms solve multi-agent problems over a connected network, where information can be exchanged only with accessible neighbors. Although several decentralized optimization algorithms exist, gaps remain between the convergence conditions and rates of decentralized and centralized algorithms. In this paper, we close some of these gaps by considering two decentralized algorithms, EXTRA and NIDS, both of which converge linearly for strongly convex objective functions. We answer two questions about them: What are the optimal upper bounds on their stepsizes? Do decentralized algorithms require stronger assumptions on the objective functions for linear convergence than centralized ones? More specifically, we relax the conditions required for linear convergence of both algorithms. For EXTRA, we show that the stepsize is comparable to that of centralized algorithms; for NIDS, the upper bound on the stepsize is shown to be exactly the same as in the centralized case. In addition, we relax the requirements on the objective functions and the mixing matrices, and we establish linear convergence for both algorithms under the weakest conditions.
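To make the two algorithms concrete, the following is a minimal sketch (not taken from the article) that runs EXTRA and NIDS on a toy decentralized least-squares problem over a five-agent ring network. The update rules follow the forms commonly stated for EXTRA and NIDS with the standard choice W_tilde = (I + W)/2; the local objectives, the ring mixing matrix, and the conservative stepsize alpha = 1/L are illustrative assumptions, not choices made in the paper.

# Hypothetical illustration (not from the article): EXTRA and NIDS on a toy
# decentralized least-squares problem, using the update rules as commonly
# stated in the literature with the standard choice W_tilde = (I + W) / 2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, variable dimension

# Local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2 (strongly convex here)
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def grad(X):
    """Stack the local gradients: row i is the gradient of f_i at X[i]."""
    return np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n)])

# Centralized minimizer of the average objective, used only as a reference
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]

# Symmetric, doubly stochastic mixing matrix for a 5-agent ring (lazy weights)
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25
W_tilde = (np.eye(n) + W) / 2

L = max(np.linalg.norm(A[i].T @ A[i], 2) for i in range(n))  # largest local smoothness
alpha = 1.0 / L                               # conservative stepsize for this toy run

def extra(steps=500):
    x_prev = np.zeros((n, d))
    x = W @ x_prev - alpha * grad(x_prev)     # x^1 = W x^0 - alpha * grad f(x^0)
    for _ in range(steps):
        # x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha (grad f(x^{k+1}) - grad f(x^k))
        x_next = (np.eye(n) + W) @ x - W_tilde @ x_prev \
                 - alpha * (grad(x) - grad(x_prev))
        x_prev, x = x, x_next
    return x

def nids(steps=500):
    x_prev = np.zeros((n, d))
    x = x_prev - alpha * grad(x_prev)         # x^1 = x^0 - alpha * grad f(x^0)
    for _ in range(steps):
        # x^{k+1} = W_tilde (2 x^k - x^{k-1} - alpha (grad f(x^k) - grad f(x^{k-1})))
        x_next = W_tilde @ (2 * x - x_prev - alpha * (grad(x) - grad(x_prev)))
        x_prev, x = x, x_next
    return x

for name, run in (("EXTRA", extra), ("NIDS", nids)):
    err = np.linalg.norm(run() - x_star)      # distance of all local copies to x*
    print(f"{name}: ||x - x*||_F = {err:.2e}")

With this conservative stepsize both runs drive every local copy to the centralized minimizer; the article's contribution is to show how much larger the stepsize can be taken, up to the centralized bound in the case of NIDS, while retaining linear convergence.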
Pages: 271-290
Page count: 20