DC-DistADMM: ADMM Algorithm for Constrained Optimization Over Directed Graphs

Cited by: 13
Authors
Khatana, Vivek [1 ]
Salapaka, Murti V. [1 ]
Affiliations
[1] Univ Minnesota, Dept Elect & Comp Engn, Minneapolis, MN 55455 USA
Keywords
Alternating direction method of multipliers (ADMM); constrained optimization; directed graphs; finite-time consensus; distributed optimization; multiagent networks; MULTIAGENT DISTRIBUTED OPTIMIZATION; CONSENSUS ALGORITHMS; CONVEX-OPTIMIZATION; LINEAR CONVERGENCE; GRADIENT METHODS; COMMUNICATION; COMPUTATION; FRAMEWORK;
DOI
10.1109/TAC.2022.3221856
CLC Number
TP [Automation & Computer Technology];
Discipline Code
0812;
Abstract
This article reports an algorithm for multiagent distributed optimization problems with a common decision variable, local linear equality and inequality constraints, and set constraints, with convergence rate guarantees. The algorithm accrues all the benefits of the alternating direction method of multipliers (ADMM) approach. It also overcomes the limitations of existing methods on convex optimization problems with linear inequality, equality, and set constraints by allowing directed communication topologies. Moreover, the algorithm can be synthesized distributively. The developed algorithm has: first, an O(1/k) rate of convergence, where k is the iteration counter, when the individual functions are convex but not necessarily differentiable, and second, a geometric rate of convergence to an arbitrarily small neighborhood of the optimal solution when the objective functions are smooth and restricted strongly convex at the optimal solution. The efficacy of the algorithm is evaluated by a comparison with state-of-the-art constrained optimization algorithms in solving a constrained distributed ℓ1-regularized logistic regression problem, and with unconstrained optimization algorithms in solving an ℓ1-regularized Huber loss minimization problem. Additionally, a comparison of the algorithm's performance with other algorithms in the literature that utilize multiple communication steps is provided.
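To make the ADMM setting described in the abstract concrete, the following is a minimal, illustrative sketch of global-consensus ADMM with an ℓ1 term, in the spirit of the problems the paper benchmarks on. It is not the paper's DC-DistADMM: it simulates the consensus step by exact averaging in one process, whereas DC-DistADMM replaces that step with a finite-time consensus protocol over a directed graph. The function names and the lasso-style objective are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def consensus_admm_lasso(A_parts, b_parts, lam=0.1, rho=1.0, iters=300):
    """Global-consensus ADMM for  sum_i ||A_i x - b_i||^2 + lam * ||x||_1.

    Illustrative sketch only: the z-update averages the agents' variables
    exactly (a centralized stand-in for the distributed consensus step).
    """
    n = A_parts[0].shape[1]
    m = len(A_parts)
    x = [np.zeros(n) for _ in range(m)]   # local primal variables
    u = [np.zeros(n) for _ in range(m)]   # scaled dual variables
    z = np.zeros(n)                       # global consensus variable
    for _ in range(iters):
        # Local x-updates: each agent solves a small ridge-type system.
        for i in range(m):
            Ai, bi = A_parts[i], b_parts[i]
            H = 2.0 * Ai.T @ Ai + rho * np.eye(n)
            x[i] = np.linalg.solve(H, 2.0 * Ai.T @ bi + rho * (z - u[i]))
        # z-update: average + soft-threshold (the consensus step).
        avg = np.mean([x[i] + u[i] for i in range(m)], axis=0)
        z = soft_threshold(avg, lam / (rho * m))
        # Dual updates drive the local variables toward agreement on z.
        for i in range(m):
            u[i] += x[i] - z
    return z
```

With lam = 0 the soft-threshold is the identity and the iteration reduces to consensus ADMM for a plain least-squares problem split across agents, which is a convenient sanity check.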
Pages: 5365-5380
Page count: 16