Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

Cited by: 581
Authors
Ram, S. Sundhar [2 ]
Nedic, A. [1 ]
Veeravalli, V. V. [2 ]
Affiliations
[1] Univ Illinois, Ind & Enterprise Syst Engn Dept, Urbana, IL 61801 USA
[2] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
Keywords
Distributed algorithm; Convex optimization; Subgradient methods; Stochastic approximation; CONSENSUS; CONVERGENCE;
DOI
10.1007/s10957-010-9737-7
Chinese Library Classification
C93 [Management]; O22 [Operations Research];
Discipline Classification Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
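The update described in the abstract (mix neighbors' iterates, take a noisy subgradient step, project onto the constraint set) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mixing matrix W, the agent objectives f_i(x) = |x - a_i|, the constraint set X = [-1, 1], the noise level, and the stepsize rule are all illustrative choices of ours.

```python
import numpy as np

# Sketch of a distributed stochastic subgradient projection step:
# 3 agents cooperatively minimize f(x) = sum_i |x - a_i| over X = [-1, 1].
# The minimizer of f over X is the median of the a_i, here x* = 0.4.

rng = np.random.default_rng(0)

a = np.array([0.2, 0.4, 0.6])        # agent i privately holds f_i(x) = |x - a_i|
W = np.array([[0.50, 0.25, 0.25],    # doubly stochastic mixing weights
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = np.array([-1.0, 0.0, 1.0])       # initial iterates, one per agent

for k in range(2000):
    v = W @ x                                 # combine neighbors' iterates
    g = np.sign(v - a)                        # subgradient of |x - a_i| at v_i
    noise = 0.1 * rng.standard_normal(3)      # zero-mean stochastic error
    alpha = 1.0 / (k + 1)                     # diminishing stepsize
    x = np.clip(v - alpha * (g + noise), -1.0, 1.0)  # project onto X = [-1, 1]

print(x)  # all agents should end up close to each other and near x* = 0.4
```

Because W is doubly stochastic, mixing preserves the network-average iterate while contracting disagreement between agents; with zero-mean errors and a diminishing stepsize this matches the regime in which the abstract claims consensus and convergence with probability 1.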
Pages: 516-545
Page count: 30
Related Papers
50 records total
  • [41] Online distributed stochastic learning algorithm for convex optimization in time-varying directed networks
    Li, Jueyou
    Gu, Chuanye
    Wu, Zhiyou
    NEUROCOMPUTING, 2020, 416 : 85 - 94
  • [42] Quantized Distributed Online Projection-Free Convex Optimization
    Zhang, Wentao
    Shi, Yang
    Zhang, Baoyong
    Lu, Kaihong
    Yuan, Deming
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 1837 - 1842
  • [43] Distributed Subgradient Method for Constrained Convex Optimization with Quantized and Event-Triggered Communication
    Hayashi, Naoki
    Ishikawa, Kazuyuki
    Takai, Shigemasa
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2020, E103A (02) : 428 - 434
  • [44] Distributed Event-Triggered Subgradient Method for Convex Optimization with a Common Constraint Set
    Kajiyama, Yuichi
    Hayashi, Naoki
    Takai, Shigemasa
    IFAC PAPERSONLINE, 2017, 50 (01): : 15319 - 15324
  • [45] A Distributed Subgradient Method for Dynamic Convex Optimization Problems Under Noisy Information Exchange
    Cavalcante, Renato L. G.
    Stanczak, Slawomir
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2013, 7 (02) : 243 - 256
  • [46] Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
    Mateos-Nunez, David
    Cortes, Jorge
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2017, 62 (06) : 2720 - 2735
  • [47] Distributed Subgradient Methods for Multi-Agent Optimization
    Nedic, Angelia
    Ozdaglar, Asuman
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2009, 54 (01) : 48 - 61
  • [48] ON CONVERGENCE RATE OF DISTRIBUTED STOCHASTIC GRADIENT ALGORITHM FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS
    Yuan, Deming
    Ho, Daniel W. C.
    Hong, Yiguang
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2016, 54 (05) : 2872 - 2892
  • [49] "EFFICIENT" SUBGRADIENT METHODS FOR GENERAL CONVEX OPTIMIZATION
    Renegar, James
    SIAM JOURNAL ON OPTIMIZATION, 2016, 26 (04) : 2649 - 2676
  • [50] A subgradient projection method for quasiconvex multiobjective optimization
    Lara, F.
    Marcavillaca, R. T.
    Thang, T. V.
    OPTIMIZATION, 2024