ON CONVERGENCE RATE OF DISTRIBUTED STOCHASTIC GRADIENT ALGORITHM FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS

Cited by: 51
Authors
Yuan, Deming [1 ]
Ho, Daniel W. C. [2 ]
Hong, Yiguang [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Automat, Nanjing 210023, Jiangsu, Peoples R China
[2] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
[3] Chinese Acad Sci, Acad Math & Syst Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
distributed convex optimization; constrained optimization algorithm; stochastic gradient; convergence rate; subgradient methods; consensus
DOI
10.1137/15M1048896
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
In this paper, we consider an optimization problem in which multiple agents cooperate to minimize the sum of their local objective functions subject to a global inequality constraint. We propose a class of distributed stochastic gradient algorithms that solve the problem using only local computation and communication, and whose implementation removes the need for intermediate projection steps. For strongly convex optimization, we employ a smoothed constraint-incorporation technique to show that, with bounded gradients, the algorithm converges at an expected rate of O(ln T/T), where T is the number of iterations. For non-strongly convex optimization, we use a reduction technique to establish an O(1/√T) convergence rate in expectation. Finally, a numerical example is provided to illustrate the convergence of the proposed algorithms.
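To make the setting in the abstract concrete, the following is a minimal illustrative sketch of a distributed stochastic (sub)gradient scheme for minimizing a sum of local objectives under a shared inequality constraint without explicit projections. It is not the algorithm analyzed in the paper: the quadratic local objectives, the ring-network mixing matrix, the noise model, and the simple penalty treatment of the constraint are all assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): agents minimize
# sum_i f_i(x) subject to a shared constraint g(x) <= 0 by alternating
# a consensus (mixing) step with a local noisy gradient step that
# penalizes constraint violation instead of projecting onto the set.

rng = np.random.default_rng(0)
n_agents, dim, T = 5, 3, 2000

# Assumed local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((dim, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(dim) for _ in range(n_agents)]

def local_stochastic_grad(i, x):
    """Noisy gradient of f_i at x (additive Gaussian noise, an assumption)."""
    noise = 0.1 * rng.standard_normal(dim)
    return A[i].T @ (A[i] @ x - b[i]) + noise

def constraint(x):
    """Assumed global inequality constraint g(x) = ||x||^2 - 1 <= 0."""
    return np.dot(x, x) - 1.0

def constraint_grad(x):
    return 2.0 * x

# Doubly stochastic mixing matrix for a ring network (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

x = [np.zeros(dim) for _ in range(n_agents)]
rho = 10.0  # penalty weight on constraint violation (assumption)

for t in range(1, T + 1):
    step = 1.0 / np.sqrt(t)  # O(1/sqrt(t)) diminishing step size
    # 1) Consensus step: each agent averages its neighbours' iterates.
    mixed = [sum(W[i, j] * x[j] for j in range(n_agents)) for i in range(n_agents)]
    # 2) Local stochastic gradient step, with a penalty term replacing
    #    an explicit projection onto {x : g(x) <= 0}.
    for i in range(n_agents):
        g = local_stochastic_grad(i, mixed[i])
        if constraint(mixed[i]) > 0:
            g = g + rho * constraint_grad(mixed[i])
        x[i] = mixed[i] - step * g

print("final average iterate:", np.mean(x, axis=0))
```

Under these assumptions, the averaged iterate drifts toward a consensus point that balances the local objectives against the constraint penalty; the paper itself obtains the stated O(ln T/T) and O(1/√T) rates through a smoothed constraint-incorporation technique and a reduction argument rather than this naive penalty.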
Pages: 2872-2892 (21 pages)
Related papers
50 items in total
  • [1] DISTRIBUTED PROXIMAL-GRADIENT METHOD FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS
    Li, Jueyou
    Wu, Changzhi
    Wu, Zhiyou
    Long, Qiang
    Wang, Xiangyu
    ANZIAM JOURNAL, 2014, 56 (02) : 160 - 178
  • [2] Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm
    Yuan, Deming
    Ho, Daniel W. C.
    Xu, Shengyuan
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (06) : 2344 - 2357
  • [3] A fixed-time gradient algorithm for distributed optimization with inequality constraints
    He, Xing
    Wei, Boyu
    Wang, Hui
    NEUROCOMPUTING, 2023, 532 : 106 - 113
  • [4] Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
    Bianchi, Pascal
    Jakubowicz, Jeremie
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2013, 58 (02) : 391 - 405
  • [5] Edge-Based Stochastic Gradient Algorithm for Distributed Optimization
    Wang, Zheng
    Li, Huaqing
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2020, 7 (03) : 1421 - 1430
  • [7] On Distributed Convex Optimization Under Inequality and Equality Constraints
    Zhu, Minghui
    Martinez, Sonia
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2012, 57 (01) : 151 - 164
  • [8] Distributed Adaptive Gradient Algorithm With Gradient Tracking for Stochastic Nonconvex Optimization
    Han, Dongyu
    Liu, Kun
    Lin, Yeming
    Xia, Yuanqing
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (09) : 6333 - 6340
  • [9] Distributed Stochastic Gradient Tracking Algorithm With Variance Reduction for Non-Convex Optimization
    Jiang, Xia
    Zeng, Xianlin
    Sun, Jian
    Chen, Jie
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (09) : 5310 - 5321
  • [10] Distributed Continuous-Time Nonsmooth Convex Optimization With Coupled Inequality Constraints
    Li, Xiuxian
    Xie, Lihua
    Hong, Yiguang
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2020, 7 (01) : 74 - 84