Triggered Gradient Tracking for asynchronous distributed optimization

Cited by: 12
Authors
Carnevale, Guido [1 ]
Notarnicola, Ivano [1 ]
Marconi, Lorenzo [1 ]
Notarstefano, Giuseppe [1 ]
Affiliations
[1] Alma Mater Studiorum - University of Bologna, Department of Electrical, Electronic and Information Engineering, Bologna, Italy
Funding
European Research Council;
Keywords
Distributed optimization; Multi-agent systems; Large-scale optimization problems and methods; Convex optimization; Consensus; Convergence; Algorithms;
DOI
10.1016/j.automatica.2022.110726
Chinese Library Classification
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
This paper proposes ASYNCHRONOUS TRIGGERED GRADIENT TRACKING, a distributed optimization algorithm that solves consensus optimization over networks with asynchronous communication. As a building block, we devise the continuous-time counterpart of the recently proposed (discrete-time) distributed gradient tracking, called CONTINUOUS GRADIENT TRACKING. Using a Lyapunov approach, we prove exponential stability of the equilibrium at which the agents' estimates are consensual on the optimal solution, for arbitrary initialization of the local estimates. Then, we propose two triggered versions of the algorithm. In the first, the agents continuously integrate their local dynamics and exchange their current local variables with neighbors in a synchronous way. In ASYNCHRONOUS TRIGGERED GRADIENT TRACKING, we propose a totally asynchronous scheme in which each agent sends its current local variables to neighbors based on a locally verifiable triggering condition. The triggering protocol preserves the linear convergence of the algorithm and excludes Zeno behavior, i.e., an infinite number of triggering events over a finite interval of time. Using the stability analysis of CONTINUOUS GRADIENT TRACKING as a preparatory result, we show that exponential stability of the equilibrium point holds for both triggered algorithms and for any estimate initialization. Finally, simulations on a data analytics problem validate the effectiveness of the proposed methods, also showing improved performance in terms of inter-agent communication. (c) 2022 Elsevier Ltd. All rights reserved.
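To make the building block concrete, here is a minimal sketch of the standard discrete-time gradient tracking iteration that the paper's continuous-time and triggered schemes build on. This is an illustrative toy instance, not the authors' code: the network, mixing matrix `W`, step size `alpha`, and local costs f_i(x) = (x - a_i)^2 / 2 are all assumptions chosen so that the minimizer is the mean of the a_i.

```python
import numpy as np

# Hypothetical toy instance: 3 agents jointly minimize sum_i (x - a_i)^2 / 2,
# whose unique minimizer is the mean of a (= 3.0 here).
a = np.array([1.0, 2.0, 6.0])

def grad(x):
    # Stacked local gradients: agent i only evaluates x_i - a_i.
    return x - a

# Doubly stochastic mixing matrix for a fully connected 3-agent network.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

alpha = 0.1            # constant step size (assumed small enough)
x = np.zeros(3)        # arbitrary initialization of the local estimates
s = grad(x)            # trackers initialized at the local gradients

for _ in range(500):
    x_new = W @ x - alpha * s              # consensus step + descent along tracker
    s = W @ s + grad(x_new) - grad(x)      # tracker update: estimates the average gradient
    x = x_new

print(np.round(x, 4))  # all entries close to the optimizer mean(a) = 3.0
```

The tracker update preserves the invariant that the entries of `s` sum to the total gradient, which is what lets every agent descend along an estimate of the *average* gradient and achieve linear convergence; the paper replaces the per-iteration synchronous exchange `W @ x`, `W @ s` with continuous-time dynamics and event-triggered, asynchronous communication.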
Pages: 12