ON CONVERGENCE RATE OF DISTRIBUTED STOCHASTIC GRADIENT ALGORITHM FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS

Cited: 51
Authors
Yuan, Deming [1 ]
Ho, Daniel W. C. [2 ]
Hong, Yiguang [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Automat, Nanjing 210023, Jiangsu, Peoples R China
[2] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
[3] Chinese Acad Sci, Acad Math & Syst Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
distributed convex optimization; constrained optimization algorithm; stochastic gradient; convergence rate; SUBGRADIENT METHODS; CONSENSUS;
DOI
10.1137/15M1048896
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we consider an optimization problem in which multiple agents cooperate to minimize the sum of their local objective functions subject to a global inequality constraint. We propose a class of distributed stochastic gradient algorithms that solve the problem using only local computation and communication. The algorithms are implemented without performing intermediate projections. For strongly convex optimization, we employ a smoothed constraint-incorporation technique to show that the algorithm converges at an expected rate of O(ln T/T) (where T is the number of iterations) when the gradients are bounded. For non-strongly convex optimization, we use a reduction technique to establish an O(1/√T) convergence rate in expectation. Finally, a numerical example is provided to illustrate the convergence of the proposed algorithms.
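The abstract describes a distributed scheme in which agents combine a consensus (averaging) step with a noisy gradient step while handling a shared inequality constraint without intermediate projections. A minimal sketch of that idea follows; note that the exact-penalty constraint handling, the toy quadratic objectives, the ring network, and all parameter values are assumptions for illustration only, not the update rule from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: 4 agents with local objectives f_i(x) = (x - a_i)^2
# and a shared inequality constraint g(x) = x - 1 <= 0. The constrained
# minimizer of sum_i f_i is x* = 1 (the unconstrained minimizer is mean(a) = 1.5).
a = np.array([0.0, 1.0, 2.0, 3.0])
n = len(a)

# Doubly stochastic mixing matrix for a ring network (the consensus step).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

beta = 2.0           # penalty weight for the constraint (an assumption)
T = 5000
x = np.zeros(n)      # each agent's local estimate
x_avg = np.zeros(n)  # running (Polyak) averages of the iterates

for t in range(T):
    step = 1.0 / (t + 10)                 # diminishing step size
    noise = 0.1 * rng.standard_normal(n)  # stochastic gradient noise
    grad = 2.0 * (x - a) + noise          # noisy local gradients of f_i
    penalty = beta * (x > 1.0)            # subgradient of beta * max(0, x - 1)
    x = W @ x - step * (grad + penalty)   # consensus step + gradient step
    x_avg = (t * x_avg + x) / (t + 1)

print(x_avg)  # averaged iterates of all agents, expected near x* = 1
```

Each agent only uses its neighbors' iterates (row i of W) and its own noisy gradient, matching the "local computation and communication" setting; the penalty term replaces a projection onto the constraint set.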
Pages: 2872-2892
Page count: 21
Related Papers
50 records total
  • [41] Distributed Nonlinear Programming Methods for Optimization Problems with Inequality Constraints
    Matei, Ion
    Baras, John S.
    2015 54TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2015, : 2649 - 2654
  • [42] Distributed Continuous-Time Optimization: Nonuniform Gradient Gains, Finite-Time Convergence, and Convex Constraint Set
    Lin, Peng
    Ren, Wei
    Farrell, Jay A.
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2017, 62 (05) : 2239 - 2253
  • [43] A stochastic averaging gradient algorithm with multi-step communication for distributed optimization
    Zheng, Zuqing
    Yan, Yu
    Feng, Liping
    Du, Zhenyuan
    Li, Huaqing
    Wang, Zheng
    Hu, Jinhui
    OPTIMAL CONTROL APPLICATIONS & METHODS, 2023, 44 (04) : 2208 - 2226
  • [44] A distributed stochastic optimization algorithm with gradient-tracking and distributed heavy-ball acceleration
    Sun, Bihao
    Hu, Jinhui
    Xia, Dawen
    Li, Huaqing
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2021, 22 (11) : 1463 - 1476
  • [45] DISTRIBUTED OPTIMIZATION BASED ON GRADIENT TRACKING REVISITED: ENHANCING CONVERGENCE RATE VIA SURROGATION
    Sun, Ying
    Scutari, Gesualdo
    Daneshmand, Amir
    SIAM JOURNAL ON OPTIMIZATION, 2022, 32 (02) : 354 - 385
  • [46] Distributed Discrete-Time Convex Optimization With Closed Convex Set Constraints: Linearly Convergent Algorithm Design
    Luan, Meng
    Wen, Guanghui
    Liu, Hongzhe
    Huang, Tingwen
    Chen, Guanrong
    Yu, Wenwu
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (04) : 2271 - 2283
  • [47] Shuffling Momentum Gradient Algorithm for Convex Optimization
    Tran, Trang H.
    Tran-Dinh, Quoc
    Nguyen, Lam M.
    VIETNAM JOURNAL OF MATHEMATICS, 2024,
  • [48] BLOCK STOCHASTIC GRADIENT ITERATION FOR CONVEX AND NONCONVEX OPTIMIZATION
    Xu, Yangyang
    Yin, Wotao
    SIAM JOURNAL ON OPTIMIZATION, 2015, 25 (03) : 1686 - 1716
  • [49] Distributed neurodynamic approaches to nonsmooth optimization problems with inequality and set constraints
    Luan, Linhua
    Wen, Xingnan
    Qin, Sitian
    COMPLEX & INTELLIGENT SYSTEMS, 2022, 8 (06) : 5511 - 5530
  • [50] Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
    Liang, Shu
    Wang, Le Yi
    Yin, George
    AUTOMATICA, 2019, 105 : 298 - 306