Optimal convergence rates of totally asynchronous optimization

Cited by: 1
Authors
Wu, Xuyang [1 ]
Magnusson, Sindri [2 ]
Feyzmahdavian, Hamid Reza [3 ]
Johansson, Mikael [1 ]
Affiliations
[1] KTH Royal Inst Technol, Sch EECS, Div Decis & Control Syst, SE-10044 Stockholm, Sweden
[2] Stockholm Univ, Dept Comp & Syst Sci, SE-16407 Stockholm, Sweden
[3] ABB Corporate Research, SE-72178 Vasteras, Sweden
Source
2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC) | 2022
Funding
Swedish Research Council;
DOI
10.1109/CDC51059.2022.9993168
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Asynchronous optimization algorithms are at the core of modern machine learning and resource allocation systems. However, most convergence results assume bounded information delays, and several important algorithms lack guarantees when they operate under total asynchrony. In this paper, we derive explicit convergence rates for the proximal incremental aggregated gradient (PIAG) and asynchronous block-coordinate descent (Async-BCD) methods under a specific model of total asynchrony, and show that the derived rates are order-optimal. The convergence bounds give an insightful understanding of how the growth rate of the delays degrades the convergence times of the algorithms. Our theoretical findings are demonstrated by a numerical example.
Pages: 6484-6490
Page count: 7