Distributed ADMM With Linear Updates Over Directed Networks

Cited by: 0
Authors
Rokade, Kiran [1 ]
Kalaimani, Rachel Kalpana [2 ]
Affiliations
[1] Cornell Univ, Sch Elect & Comp Engn, Ithaca, NY 14850 USA
[2] Indian Inst Technol Madras, Dept Elect Engn, Chennai 600036, Tamil Nadu, India
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2025, Vol. 12, Issue 2
Keywords
Convex functions; Optimization; Directed graphs; Convergence; Standards; Vectors; Linear programming; Approximation algorithms; Statistical learning; Robustness; Alternating Direction Method of Multipliers (ADMM); dual-ascent; directed graphs; distributed computing; parallel computing; OPTIMIZATION; CONVERGENCE
DOI
10.1109/TNSE.2025.3529703
CLC number
T [Industrial Technology]
Subject classification code
08
Abstract
Distributed optimization over a network of agents is ubiquitous in applications such as power systems, robotics, and statistical learning. In many settings, the communication network is directed, i.e., the communication links between agents are unidirectional. While several variations of gradient-descent-based primal methods have been proposed for distributed optimization over directed networks, the extension of dual-ascent methods to directed networks remains less explored. In this paper, we propose a distributed version of the Alternating Direction Method of Multipliers (ADMM) with linear updates for directed networks using balancing weights, called BW-DADMM (Balancing Weights Directed ADMM). We show that if the objective function of the minimization problem is smooth and strongly convex, then BW-DADMM achieves a geometric rate of convergence to the optimal point. Our algorithm exploits the robustness inherent in ADMM by not enforcing exact consensus at every iteration, thereby significantly improving the convergence rate. We illustrate this through numerical examples comparing the performance of BW-DADMM with that of state-of-the-art ADMM methods over directed graphs.
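For context on the dual-ascent machinery the abstract refers to, the following is a minimal sketch of the standard global-consensus ADMM iteration (Boyd et al., 2011) for a sum of local least-squares objectives. It is not the paper's BW-DADMM: the averaging z-update below assumes all-to-all (effectively undirected) communication, and replacing that step with balanced communication over a directed graph is precisely what the balancing weights of BW-DADMM address. The problem sizes, the penalty parameter rho, and the quadratic local objectives are illustrative assumptions.

```python
import numpy as np

# Sketch of global-consensus ADMM (Boyd et al., 2011) for
#   minimize  sum_i (1/2) ||A_i x - b_i||^2   over n agents.
# This is the undirected baseline that BW-DADMM generalizes to
# directed networks; the paper's balancing weights are NOT
# reproduced here. All sizes and rho are illustrative.

rng = np.random.default_rng(0)
n_agents, d, m = 5, 10, 20      # agents, variable dim, rows per agent
rho = 1.0                        # ADMM penalty parameter (assumed)

A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
b = [rng.standard_normal(m) for _ in range(n_agents)]

# Precompute Cholesky factors for the closed-form x-update of each
# quadratic local objective: (A_i^T A_i + rho I) x = A_i^T b_i + rho (z - u_i).
chol = [np.linalg.cholesky(Ai.T @ Ai + rho * np.eye(d)) for Ai in A]
Atb = [Ai.T @ bi for Ai, bi in zip(A, b)]

x = np.zeros((n_agents, d))      # local primal variables
u = np.zeros((n_agents, d))      # scaled dual variables
z = np.zeros(d)                  # global consensus variable

for k in range(200):
    # x-update: x_i = argmin f_i(x) + (rho/2) ||x - z + u_i||^2
    for i in range(n_agents):
        rhs = Atb[i] + rho * (z - u[i])
        y = np.linalg.solve(chol[i], rhs)
        x[i] = np.linalg.solve(chol[i].T, y)
    # z-update: average of x_i + u_i. This averaging is the
    # undirected consensus step that BW-DADMM replaces with
    # balanced directed communication.
    z = (x + u).mean(axis=0)
    # Dual ascent on the consensus constraint x_i = z.
    u += x - z

# Sanity check against the pooled least-squares solution.
x_star, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
print("consensus error:", np.linalg.norm(z - x_star))
```

With smooth, strongly convex local objectives such as these quadratics, consensus ADMM is known to converge geometrically, which is the regime the paper's analysis of BW-DADMM addresses.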
Pages: 1396-1407
Page count: 12