On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps

Cited by: 12
Authors
Berahas, Albert S. [1 ]
Bollapragada, Raghu [2 ]
Wei, Ermin [3 ]
Affiliations
[1] Univ Michigan, Dept Ind & Operat Engn, Ann Arbor, MI 48109 USA
[2] Univ Texas Austin, Operat Res & Ind Engn Program, Austin, TX 78712 USA
[3] Northwestern Univ, Dept Ind Engn Management Sci, Dept Elect & Comp Engn, Evanston, IL 60208 USA
Keywords
Convergence; Optimization; Linear programming; Signal processing algorithms; Symmetric matrices; Eigenvalues and eigenfunctions; Convex functions; Distributed optimization; communication; optimization algorithms; network optimization; DISTRIBUTED SUBGRADIENT METHODS; COMMUNICATION; OPTIMIZATION; ALGORITHMS; NETWORKS;
DOI
10.1109/TSP.2021.3094906
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline classification codes
0808; 0809;
Abstract
In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting where the cost of communication and/or computation can be expensive. We extend and generalize the analysis of a class of nested gradient-based distributed algorithms [NEAR-DGD (Berahas et al., 2019)] to account for multiple gradient steps at every iteration. We characterize the effect of performing multiple gradient steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-linear convergence to the exact solution with a fixed number of gradient steps and an increasing number of consensus steps. We test the performance of the generalized method on quadratic functions and show the effect of multiple consensus and gradient steps in terms of iterations, number of gradient evaluations, number of communications, and cost.
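To make the abstract's setup concrete, the following is a minimal NumPy sketch of a nested gradient/consensus iteration on local scalar quadratics, in the spirit of (but not identical to) the paper's NEAR-DGD family. The step size `alpha`, the counts `t_grad` and `t_comm`, and the ring mixing matrix `W` are all illustrative choices, not values from the paper; each outer iteration performs multiple local gradient steps followed by multiple consensus (communication) steps.

```python
import numpy as np

# Local objectives f_i(x) = 0.5 * a_i * x^2 - b_i * x (scalar x per node).
# The global minimizer of sum_i f_i is x* = (sum_i b_i) / (sum_i a_i).
n = 4
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, n)      # local curvatures
b = rng.uniform(-1.0, 1.0, n)     # local linear terms
x_star = b.sum() / a.sum()

# Doubly stochastic, symmetric mixing matrix for a ring of n nodes
# (illustrative choice; any connected-graph mixing matrix would do).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha, t_grad, t_comm = 0.1, 2, 5   # hypothetical parameter values
x = np.zeros(n)                     # one estimate per node
for k in range(200):
    for _ in range(t_grad):         # multiple local gradient steps
        x = x - alpha * (a * x - b)
    for _ in range(t_comm):         # multiple consensus steps
        x = W @ x

# With enough consensus steps, the nodes agree closely and sit in a
# small neighborhood of x_star, consistent with the abstract's claims.
err = np.abs(x - x_star).max()
spread = np.ptp(x)                  # disagreement across nodes
```

Varying `t_grad` and `t_comm` here trades gradient evaluations against communications, which is the cost trade-off the paper's experiments on quadratics examine.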
Pages: 4192-4203
Number of pages: 12
References
60 total
[1] [Anonymous], 2013, Diffusion Adaptation.
[2] Berahas, A. S., 2019, IEEE Conference on Decision and Control, p. 1519. DOI: 10.1109/CDC40024.2019.9029541
[3] Berahas, A. S.; Bollapragada, R.; Keskar, N. S.; Wei, E. Balancing Communication and Computation in Distributed Optimization. IEEE Transactions on Automatic Control, 2019, 64(8): 3141-3155.
[4] Bertsekas, D. P., 1989, Parallel and Distributed Computation: Numerical Methods.
[5] Bonawitz, K., 2019, Proc. Mach. Learning Syst.
[6] Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 2010, 3(1): 1-122.
[7] Cao, Y.; Yu, W.; Ren, W.; Chen, G. An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination. IEEE Transactions on Industrial Informatics, 2013, 9(1): 427-438.
[8] Chen, A. I., 2012, Annual Allerton Conference, p. 601. DOI: 10.1109/Allerton.2012.6483273
[9] Chen, R. T. Q., 2018, Advances in Neural Information Processing Systems, vol. 31.
[10] Chow, Y. T., 2016, Asilomar Conference on Signals, Systems, and Computers, p. 1715. DOI: 10.1109/ACSSC.2016.7869675