Constrained Consensus and Optimization in Multi-Agent Networks

Cited by: 1518
Authors
Nedic, Angelia [1 ]
Ozdaglar, Asuman [2 ]
Parrilo, Pablo A. [2 ]
Affiliations
[1] Univ Illinois, Ind & Enterprise Syst Engn Dept, Urbana, IL 61801 USA
[2] MIT, Lab Informat & Decis Syst, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation
Keywords
consensus; constraints; distributed optimization; subgradient algorithms; distributed consensus; subgradient methods; convergence; agents
DOI
10.1109/TAC.2010.2041686
CLC classification
TP (automation technology; computer technology)
Discipline code
0812
Abstract
We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
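As an illustration of the projected subgradient iteration the abstract describes (local averaging, a subgradient step on the local objective, then projection onto the local constraint set), here is a minimal sketch. The quadratic objectives, interval constraint sets, complete graph with equal weights, and diminishing stepsize are assumptions chosen for illustration; they are not taken from the paper.

```python
import numpy as np

def projected_subgradient(c, lo, hi, steps=2000):
    """Sketch of a distributed projected subgradient iteration.

    Hypothetical setup: n agents jointly minimize
        sum_i 0.5 * (x - c[i])**2
    subject to x lying in the intersection of the intervals
    [lo[i], hi[i]].  We assume a complete graph with equal weights
    1/n and a diminishing stepsize alpha_k = 1/(k+1).
    """
    n = len(c)
    x = np.zeros(n)                          # each agent's estimate
    for k in range(steps):
        v = np.full(n, x.mean())             # local averaging (equal weights 1/n)
        g = v - c                            # subgradient of each local objective at v
        alpha = 1.0 / (k + 1)                # diminishing stepsize
        x = np.clip(v - alpha * g, lo, hi)   # subgradient step, then projection
    return x

# With c = [0, 2, 4] and constraint intersection [1, 3], the
# unconstrained minimizer mean(c) = 2 is feasible, so all agent
# estimates should approach 2.
estimates = projected_subgradient(
    c=np.array([0.0, 2.0, 4.0]),
    lo=np.array([-1.0, -1.0, 1.0]),
    hi=np.array([5.0, 3.0, 5.0]),
)
```

Here `np.clip` serves as the Euclidean projection onto each agent's interval; for general convex constraint sets it would be replaced by the corresponding projection operator.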
Pages: 922-938
Page count: 17