Barzilai-Borwein gradient tracking method for distributed optimization over directed networks

Cited: 0
Authors
Gao J. [1]
Liu X.-E. [2]
Affiliations
[1] School of Artificial Intelligence, Hebei University of Technology, Tianjin
[2] Institute of Mathematics, Hebei University of Technology, Tianjin
Source
Kongzhi Lilun Yu Yingyong/Control Theory and Applications | 2023, Vol. 40, No. 9
Funding
National Natural Science Foundation of China
Keywords
Barzilai-Borwein method; convergence rate; directed graphs; distributed optimization; multi-agent systems; optimization algorithm;
DOI
10.7641/CTA.2022.20125
Abstract
This paper studies the distributed optimization problem over directed networks, where the global objective function is the average of the smooth and strongly convex local objective functions held by the agents. Motivated by the ability of Barzilai-Borwein step sizes to improve the performance of gradient methods, a distributed Barzilai-Borwein gradient tracking method is proposed. Unlike the distributed gradient algorithms in the literature that use fixed step sizes, the proposed method allows each agent to compute its step size automatically from its local gradient information. By using row- and column-stochastic weights simultaneously, the method avoids the computation and communication required for eigenvector estimation. It is proved that the iterative sequence generated by the proposed method converges linearly to the optimal solution for smooth and strongly convex objective functions. Simulation results on a distributed logistic regression problem show that the proposed method outperforms several state-of-the-art distributed gradient algorithms with fixed step sizes. © 2023 South China University of Technology. All rights reserved.
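The abstract describes a gradient tracking scheme that mixes iterates with a row-stochastic matrix, mixes gradient trackers with a column-stochastic matrix, and lets each agent set its own Barzilai-Borwein (BB) step size from local gradient differences. A minimal sketch of that pattern is given below; the quadratic local objectives, the 3-agent directed ring, the step-size safeguards, and all variable names are illustrative assumptions, not the paper's actual experimental setup.

```python
# Hedged sketch of distributed BB gradient tracking over a directed graph:
# row-stochastic mixing for iterates, column-stochastic mixing for trackers,
# and a local, safeguarded BB step size per agent. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 2                                   # agents, problem dimension

# Local objectives f_i(x) = 0.5 x^T A_i x - b_i^T x (smooth, strongly convex).
A = [M.T @ M + np.eye(d) for M in rng.normal(size=(n, d, d))]
b = [rng.normal(size=d) for _ in range(n)]
grad = lambda i, x: A[i] @ x - b[i]
x_star = np.linalg.solve(sum(A), sum(b))      # minimizer of the average

# Directed ring 0 -> 1 -> 2 -> 0 with self-loops.
# R is row-stochastic (mixes states); C is column-stochastic (mixes gradient
# trackers). No eigenvector estimation is needed.
R = np.array([[.5, 0., .5], [.5, .5, 0.], [0., .5, .5]])
C = np.array([[.5, 0., .5], [.5, .5, 0.], [0., .5, .5]])

x = rng.normal(size=(n, d))                   # local iterates
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                                  # gradient trackers
alpha = np.full(n, 0.01)                      # initial step sizes (assumed)

for _ in range(2000):
    x_new = R @ x - alpha[:, None] * y        # consensus + tracked-gradient step
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = C @ y + g_new - g                     # track the average gradient
    for i in range(n):                        # local BB step: s^T s / s^T z
        s, z = x_new[i] - x[i], g_new[i] - g[i]
        if s @ z > 1e-12:                     # safeguarded to a fixed interval
            alpha[i] = np.clip(s @ s / (s @ z), 1e-3, 0.05)
    x, g = x_new, g_new

print(max(np.linalg.norm(x[i] - x_star) for i in range(n)))
```

The clipping interval is a standard safeguard for BB steps in a decentralized setting; without it, an overly long local step can destabilize the consensus dynamics even when each local problem is well conditioned.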
Pages: 1637-1645
Page count: 8
References
41 in total
[11] SHI W, LING Q, WU G, et al., EXTRA: An exact first-order algorithm for decentralized consensus optimization, SIAM Journal on Optimization, 25, 2, pp. 944-966, (2015)
[12] XU J, ZHU S, SOH Y C, et al., Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant step-sizes, The 54th IEEE Conference on Decision and Control, pp. 2055-2060, (2015)
[13] NEDIC A, OLSHEVSKY A, SHI W, et al., Geometrically convergent distributed optimization with uncoordinated step-sizes, American Control Conference, pp. 3950-3955, (2017)
[14] QU G, LI N., Harnessing smoothness to accelerate distributed optimization, IEEE Transactions on Control of Network Systems, 5, 3, pp. 1245-1260, (2018)
[15] BERAHAS A S, BOLLAPRAGADA R, KESKAR N S, et al., Balancing communication and computation in distributed optimization, IEEE Transactions on Automatic Control, 64, 8, pp. 3141-3155, (2019)
[16] LI H, LIN Z C., Revisiting EXTRA for smooth distributed optimization, SIAM Journal on Optimization, 30, 3, pp. 1795-1821, (2020)
[17] QU G, LI N., Accelerated distributed Nesterov gradient descent, IEEE Transactions on Automatic Control, 65, 6, pp. 2566-2581, (2020)
[18] LI H, FANG C, YIN W, et al., Decentralized accelerated gradient methods with increasing penalty parameters, IEEE Transactions on Signal Processing, 68, pp. 4855-4870, (2020)
[19] ZHU M, MARTINEZ S., Discrete-time dynamic average consensus, Automatica, 46, 2, pp. 322-329, (2010)
[20] NEDIC A, OLSHEVSKY A., Distributed optimization of strongly convex functions on directed time-varying graphs, Global Conference on Signal and Information Processing, pp. 329-332, (2013)