Local Prediction for Enhanced Convergence of Distributed Optimization Algorithms

Cited by: 16
Authors
Mai, Van Sy [1 ,2 ]
Abed, Eyad H. [1 ,2 ]
Affiliations
[1] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
[2] Univ Maryland, Inst Syst Res, College Pk, MD 20742 USA
Source
IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS | 2018, Vol. 5, No. 4
Keywords
Distributed algorithms; gradient methods; graph theory; network consensus; SUBGRADIENT METHODS; CONVEX-OPTIMIZATION; CONSENSUS;
DOI
10.1109/TCNS.2017.2776084
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
This paper studies distributed optimization problems in which a network of agents seeks to minimize the sum of their private cost functions. We propose new algorithms based on the distributed subgradient method and the finite-time consensus protocol introduced by Sundaram and Hadjicostis (2007). In our first algorithm, the local optimization variables are updated cyclically through a subgradient step, while the consensus variables follow a usual consensus protocol that is periodically interrupted by a predictive consensus estimate reset operation. For convex cost functions with bounded subgradients, this algorithm is guaranteed to converge to within a certain range of the optimal value when a constant step size is used, or to the optimal value when a diminishing step size is used. For differentiable cost functions whose sum is convex and has a Lipschitz continuous gradient, convergence to the optimal value can be ensured with a constant step size, even if some of the individual cost functions are nonconvex. In addition, exponential convergence to the optimal solution is achieved when the global cost function is further assumed to be strongly convex. In these cases, the local optimization variables reach consensus in finite time and then behave as they would under the centralized subgradient method applied to the global problem, except on a slower time scale. The second algorithm is specialized to the case of quadratic cost functions and converges in finite time to the optimal solution. Simulation examples are given to illustrate the algorithms.
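The first algorithm alternates ordinary consensus iterations with a predictive reset of the consensus variables, followed by a local subgradient step. The Python sketch below illustrates only that structure on a toy problem; it is not the authors' implementation, and the exact network average is substituted for the finite-time consensus prediction of Sundaram and Hadjicostis (2007). The cost functions, the ring-graph weights, and names such as `subgrad` and `targets` are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' code): each cycle runs a few ordinary
# consensus iterations, resets the consensus variables to a predicted consensus
# value, and then takes a local subgradient step. The paper computes the
# prediction with the finite-time consensus protocol of Sundaram and
# Hadjicostis (2007); here, purely for illustration, the exact network average
# (the consensus limit for a doubly stochastic weight matrix) is used instead.

np.random.seed(0)
n = 5                                   # number of agents
targets = np.random.randn(n)            # agent i's private cost: f_i(x) = |x - targets[i]|

def subgrad(i, x):
    """A subgradient of f_i(x) = |x - targets[i]| at x."""
    return np.sign(x - targets[i])

# Doubly stochastic weights for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                         # local optimization variables
K = 10                                  # consensus iterations per optimization cycle
alpha = 0.05                            # constant step size

for cycle in range(300):
    z = x.copy()                        # consensus variables for this cycle
    for _ in range(K):
        z = W @ z                       # usual consensus protocol
    # Predictive reset: the average is preserved by the doubly stochastic W and
    # equals the consensus limit, which the finite-time protocol would predict.
    z = np.full(n, z.mean())
    x = z - alpha * np.array([subgrad(i, z[i]) for i in range(n)])

print("local variables after 300 cycles:", x)
print("minimizer of sum_i |x - t_i| (the median):", np.median(targets))
```

With the constant step size above, the iterates settle into a small neighborhood of the median, mirroring the abstract's constant-step-size guarantee; replacing `alpha` with a diminishing sequence (e.g., proportional to 1/cycle) would drive them to the optimum.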
Pages: 1962-1975
Number of pages: 14
Related Papers
Total: 40
[1]  
[Anonymous], 2003, Introductory Lectures on Convex Optimization
[2]  
[Anonymous], 1999, Athena Scientific, Belmont
[3]  
[Anonymous], 2001, Algebraic graph theory
[4]  
[Anonymous], [No title captured]
[5]  
[Anonymous], 2012, THESIS
[6]   Consensus-based distributed sensor calibration and least-square parameter identification in WSNs [J].
Bolognani, Saverio ;
Del Favero, Simone ;
Schenato, Luca ;
Varagnolo, Damiano .
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2010, 20 (02) :176-193
[7]  
Boyd S., 2003, Lecture notes of EE392o, Stanford University, Autumn Quarter
[8]  
Brouwer A.E., 1989, Distance-Regular Graphs, Vol. 18
[9]   Distributed Finite-Time Computation of Digraph Parameters: Left-Eigenvector, Out-Degree and Spectrum [J].
Charalambous, Themistoklis ;
Rabbat, Michael G. ;
Johansson, Mikael ;
Hadjicostis, Christoforos N. .
IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2016, 3 (02) :137-148
[10]   Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling [J].
Duchi, John C. ;
Agarwal, Alekh ;
Wainwright, Martin J. .
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2012, 57 (03) :592-606