Distributed Newton's Method for Network Cost Minimization

Cited by: 16
Authors
Cao, Xuanyu [1]
Liu, K. J. Ray [2]
Affiliations
[1] Univ Illinois, Coordinated Sci Lab, Champaign, IL 61801 USA
[2] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
Keywords
Newton method; Minimization; Convergence; Cost function; Approximation algorithms; Signal processing algorithms; Decentralized optimization; linear convergence; network cost minimization; network optimization; Newton's method; quadratic convergence; OPTIMIZATION; CONSENSUS; ADMM; DECOMPOSITION; CONVERGENCE; ALGORITHMS
DOI
10.1109/TAC.2020.2989266
CLC number
TP [automation technology, computer technology]
Subject classification code
0812
Abstract
In this article, we examine a novel generic network cost minimization problem, in which every node has a local decision vector to optimize. Each node incurs a cost associated with its decision vector, while each link incurs a cost related to the decision vectors of its two end nodes. All nodes collaborate to minimize the overall network cost. The formulated network cost minimization problem has broad applications in distributed signal processing and control, in which the notion of link costs often arises. To solve this problem in a decentralized manner, we develop a distributed variant of Newton's method, which possesses faster convergence than alternative first-order optimization methods such as gradient descent and alternating direction method of multipliers. The proposed method is based on an appropriate splitting of the Hessian matrix and an approximation of its inverse, which is used to determine the Newton step. Global linear convergence of the proposed algorithm is established under several standard technical assumptions on the local cost functions. Furthermore, analogous to classical centralized Newton's method, a quadratic convergence phase of the algorithm over a certain time interval is identified. Finally, numerical simulations are conducted to validate the effectiveness of the proposed algorithm and its superiority over other first-order methods, especially when the cost functions are ill-conditioned. Complexity issues of the proposed distributed Newton's method and alternative first-order methods are also discussed.
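The abstract's key idea — splitting the Hessian and approximating its inverse so that the Newton direction can be computed with only neighbor communication — can be illustrated on a toy instance. The sketch below is an illustrative reconstruction, not the paper's algorithm: it assumes scalar decisions, quadratic node costs a_i/2 (x_i - b_i)^2, and quadratic link costs w/2 (x_i - x_j)^2, splits the (here constant) Hessian as H = D - B into diagonal and off-diagonal parts, and approximates H^{-1} with a truncated Neumann series, so each refinement of the direction costs one round of neighbor exchange (the B @ d product).

```python
import numpy as np

# Toy network cost minimization: ring graph, quadratic node and link costs.
rng = np.random.default_rng(0)
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
a = rng.uniform(1.0, 3.0, n)   # node cost curvatures: f_i = a_i/2 (x_i - b_i)^2
b = rng.uniform(-1.0, 1.0, n)  # node cost targets
w = 1.0                        # link cost weight: g_ij = w/2 (x_i - x_j)^2

# Graph Laplacian of the communication/cost graph.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

H = np.diag(a) + w * L   # exact Hessian of the total cost (constant here)
D = np.diag(np.diag(H))  # diagonal part: each node knows its own entry
B = D - H                # off-diagonal part: couples only graph neighbors

def grad(x):
    """Gradient of total cost; node i needs only its neighbors' values."""
    return a * (x - b) + w * (L @ x)

def newton_dir(g, K=10):
    """Approximate -H^{-1} g via K terms of the Neumann series
    H^{-1} ~ sum_k (D^{-1} B)^k D^{-1}.  Each B @ d product is one
    round of neighbor communication; convergence of the series holds
    here because H is strictly diagonally dominant."""
    Dinv = 1.0 / np.diag(D)
    d = -Dinv * g
    for _ in range(K):
        d = Dinv * (B @ d) - Dinv * g
    return d

x = np.zeros(n)
for _ in range(30):
    x = x + newton_dir(grad(x))

x_star = np.linalg.solve(H, a * b)  # centralized optimum, for comparison
print(np.max(np.abs(x - x_star)))   # residual shrinks linearly per iteration
```

With a strictly diagonally dominant Hessian the truncation error contracts by roughly the spectral radius of D^{-1}B raised to the (K+1)-th power per outer iteration, which is the mechanism behind the linear (and locally quadratic-phase) convergence the abstract reports; for general non-quadratic costs the Hessian, and hence the splitting, would be recomputed at each iterate.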
Pages: 1278-1285
Page count: 8