Optimal distributed stochastic mirror descent for strongly convex optimization

Cited: 110
Authors
Yuan, Deming [1 ,2 ]
Hong, Yiguang [3 ]
Ho, Daniel W. C. [4 ]
Jiang, Guoping [2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Automat, Nanjing 210094, Jiangsu, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Coll Automat, Nanjing 210023, Jiangsu, Peoples R China
[3] Chinese Acad Sci, Acad Math & Syst Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
[4] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Distributed stochastic optimization; Strong convexity; Non-Euclidean divergence; Mirror descent; Epoch gradient descent; Optimal convergence rate; VARYING DIRECTED-GRAPHS; SUBGRADIENT METHODS; ALGORITHMS; CONSENSUS; CONSTRAINTS; NETWORKS;
DOI
10.1016/j.automatica.2017.12.053
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
In this paper we study convergence rates for stochastic strongly convex optimization, in the non-Euclidean sense, over a constraint set on a time-varying multi-agent network. We propose two efficient non-Euclidean stochastic subgradient descent algorithms that use a Bregman divergence as the distance-measuring function, in place of the Euclidean distance employed by standard distributed stochastic projected subgradient algorithms. For distributed optimization of non-smooth, strongly convex functions for which only stochastic subgradients are available, the first algorithm recovers the best previously known rate of O(ln(T)/T), where T is the total number of iterations. The second algorithm is an epoch variant of the first and attains the optimal convergence rate of O(1/T), matching that of the best previously known centralized stochastic subgradient algorithm. Finally, we report simulation results illustrating the proposed algorithms. (C) 2018 Elsevier Ltd. All rights reserved.
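The two ingredients the abstract names, a mirror-descent update driven by a Bregman divergence and an epoch schedule for strongly convex losses, can be made concrete. What follows is a minimal illustrative sketch, not the authors' algorithm: it assumes the negative-entropy mirror map over the probability simplex (so the Bregman prox step becomes the closed-form exponentiated-gradient update), a fixed doubly stochastic mixing matrix W, and an epoch schedule that doubles the epoch length while halving the step size, in the spirit of the epoch gradient descent mentioned in the keywords. All names here (distributed_epoch_md, exp_grad_step, stoch_subgrad) are hypothetical.

```python
import numpy as np

def exp_grad_step(y, g, eta):
    """One mirror-descent step on the probability simplex.

    With the negative-entropy mirror map the Bregman divergence is the
    KL divergence, and the prox step takes this closed multiplicative
    form (the exponentiated-gradient update)."""
    z = y * np.exp(-eta * g)
    return z / z.sum()

def distributed_epoch_md(stoch_subgrad, W, dim, n_epochs=6,
                         epoch_len=50, eta=0.5, seed=None):
    """Epoch-based distributed stochastic mirror descent (illustrative).

    stoch_subgrad(i, x, rng): noisy subgradient of agent i's loss at x.
    W: doubly stochastic mixing (consensus) matrix, one row per agent.
    Each epoch doubles in length while the step size is halved."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    X = np.full((n, dim), 1.0 / dim)   # all agents start at simplex center
    for _ in range(n_epochs):
        for _ in range(epoch_len):
            Y = W @ X                  # consensus: mix neighbors' iterates
            for i in range(n):
                g = stoch_subgrad(i, Y[i], rng)
                X[i] = exp_grad_step(Y[i], g, eta)
        eta *= 0.5                     # halve step size each epoch
        epoch_len *= 2                 # double epoch length
    return X.mean(axis=0)              # average of the agents' iterates

# Toy usage: agent i holds the strongly convex loss ||x - c_i||^2,
# observed only through noisy gradients.
d, n = 5, 4
C = np.random.default_rng(0).dirichlet(np.ones(d), size=n)
noisy_grad = lambda i, x, rng: 2.0 * (x - C[i]) + 0.1 * rng.standard_normal(d)
W = np.full((n, n), 1.0 / n)           # complete-graph averaging weights
x_hat = distributed_epoch_md(noisy_grad, W, d)
```

On the time-varying networks the paper treats, W would change at every iteration; the fixed complete-graph W above is only there to keep the toy self-contained.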
Pages: 196-203
Page count: 8