A divide-and-conquer algorithm for distributed optimization on networks

Cited by: 3
Authors
Emirov, Nazar [1 ]
Song, Guohui [2 ]
Sun, Qiyu [3 ]
Affiliations
[1] Boston Coll, Dept Comp Sci, Chestnut Hill, MA 02467 USA
[2] Old Dominion Univ, Dept Math & Stat, Norfolk, VA 23529 USA
[3] Univ Cent Florida, Dept Math, Orlando, FL 32816 USA
Funding
National Science Foundation (US);
Keywords
Divide-and-conquer algorithm; Distributed optimization; Graph signal processing; SENSOR NETWORKS; CONVERGENCE; CONSENSUS; ADMM;
DOI
10.1016/j.acha.2023.101623
Chinese Library Classification
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
In this paper, we consider networks whose topology is described by a connected undirected graph G = (V, E), in which some agents (fusion centers) are equipped with processing power and local peer-to-peer communication, together with the optimization problem min_x {F(x) = ∑_{ℓ∈V} f_ℓ(x)}, where each local objective function f_ℓ depends only on the variables at the vertex ℓ ∈ V and its neighbors. We introduce a divide-and-conquer algorithm to solve this optimization problem in a distributed and decentralized manner. The proposed divide-and-conquer algorithm converges exponentially, its computational cost is almost linear in the size of the network, and it can be fully implemented at the fusion centers of the network. In addition, our numerical experiments indicate that the proposed divide-and-conquer algorithm outperforms popular decentralized optimization methods in solving least squares problems, both with and without the ℓ1 penalty, and performs well on networks equipped with asynchronous local peer-to-peer communication.
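The problem class in the abstract can be illustrated with a minimal sketch. This is not the authors' divide-and-conquer algorithm; it is a generic decentralized gradient descent for a graph-structured least squares problem, with a hypothetical path graph and data, where each local objective f_ℓ(x) = 0.5·(x_ℓ + ∑_{j~ℓ} x_j − b_ℓ)² depends only on the variables at vertex ℓ and its neighbors, and each update uses only neighbor-to-neighbor communication:

```python
# Hypothetical example (NOT the paper's divide-and-conquer method): plain
# decentralized gradient descent for min_x F(x) = sum_l f_l(x) with
# f_l(x) = 0.5 * (x_l + sum_{j ~ l} x_j - b_l)^2, i.e. least squares
# for (I + A) x = b with A the graph adjacency matrix.

# Path graph on 4 vertices: 0 - 1 - 2 - 3 (assumed for illustration)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
b = [1.0, 0.0, -1.0, 2.0]   # local data b_l held at each vertex
x = [0.0] * 4               # one variable per vertex
eta = 0.1                   # step size, small enough for convergence here

for _ in range(2000):
    # Each vertex l computes its residual from its own and neighbors' values.
    r = [x[l] + sum(x[j] for j in neighbors[l]) - b[l] for l in range(4)]
    # dF/dx_k aggregates the residuals of vertex k and its neighbors only,
    # so the update is fully local (one round of peer-to-peer exchange).
    x = [x[k] - eta * (r[k] + sum(r[j] for j in neighbors[k]))
         for k in range(4)]

# Maximum local residual after the iterations.
residual = max(abs(x[l] + sum(x[j] for j in neighbors[l]) - b[l])
               for l in range(4))
```

The paper's contribution, per the abstract, is a divide-and-conquer scheme with exponential convergence and near-linear cost in the network size; the iteration above only demonstrates the locality structure of the objective, not those guarantees.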
Pages: 19