Stochastic mirror descent method for distributed multi-agent optimization

Times cited: 21
Authors
Li, Jueyou [1 ,2 ]
Li, Guoquan [1 ]
Wu, Zhiyou [1 ]
Wu, Changzhi [3 ]
Affiliations
[1] Chongqing Normal Univ, Sch Math Sci, Chongqing 400047, Peoples R China
[2] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
[3] Curtin Univ, Sch Built Environm, Bentley, WA 6102, Australia
Keywords
Distributed algorithm; Multi-agent network; Mirror descent; Stochastic approximation; Convex optimization; GRADIENT-FREE METHOD; CONVEX-OPTIMIZATION; SUBGRADIENT METHODS; ALGORITHMS; CONSENSUS; NETWORKS;
DOI
10.1007/s11590-016-1071-z
Chinese Library Classification
C93 [Management Science]; O22 [Operations Research];
Discipline classification codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
This paper considers a distributed optimization problem over a time-varying multi-agent network, where each agent has local access only to its own convex objective function, and the agents cooperatively minimize the sum of these functions over the network. Based on the mirror descent method, we develop a distributed algorithm that utilizes subgradient information corrupted by stochastic errors. We first analyze the effect of the stochastic errors on the convergence of the algorithm, and then provide an explicit bound on the convergence rate as a function of the error bound and the number of iterations. Our results show that, when the subgradient evaluations are subject to stochastic errors, the algorithm asymptotically converges to the optimal value of the problem to within an error level. The proposed algorithm can be viewed as a generalization of distributed subgradient projection methods, since it employs a general Bregman divergence in place of the squared Euclidean distance. Finally, simulation results on a regularized hinge regression problem illustrate the effectiveness of the algorithm.
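The abstract describes agents that alternate a network-averaging (consensus) step with a mirror-descent step driven by noisy subgradients. The following is a minimal illustrative sketch of that idea, not the paper's exact algorithm: it assumes a static complete network with a doubly stochastic mixing matrix, the entropic (KL) Bregman divergence over the probability simplex, and additive Gaussian noise as the stochastic subgradient error. All function names and parameters here are hypothetical.

```python
import numpy as np

def entropic_mirror_step(z, g, eta):
    """One mirror-descent step over the probability simplex.

    With the KL (entropic) Bregman divergence, the update
    argmin_x <g, x> + (1/eta) * KL(x || z) has the closed form
    x proportional to z * exp(-eta * g) (exponentiated gradient)."""
    w = z * np.exp(-eta * g)
    return w / w.sum()

def distributed_smd(subgrads, W, x0, eta, noise_std, steps, rng):
    """Sketch of distributed stochastic mirror descent.

    subgrads[i](x) returns a subgradient of agent i's local objective;
    W is a doubly stochastic mixing matrix for the network (static here,
    unlike the time-varying networks treated in the paper);
    noise_std models the stochastic errors in subgradient evaluations."""
    n = len(subgrads)
    X = np.tile(x0, (n, 1))           # one iterate per agent
    for _ in range(steps):
        Z = W @ X                      # consensus (mixing) step
        for i in range(n):
            g = subgrads[i](Z[i]) + noise_std * rng.standard_normal(len(x0))
            X[i] = entropic_mirror_step(Z[i], g, eta)
    return X

# Toy example: three agents jointly minimize sum_i ||x - c_i||^2
# over the simplex; the minimizer is the uniform distribution.
rng = np.random.default_rng(0)
centers = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.0, 0.0, 1.0])]
subgrads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
W = np.full((3, 3), 1.0 / 3.0)        # complete-graph averaging
x0 = np.full(3, 1.0 / 3.0)
X = distributed_smd(subgrads, W, x0, eta=0.1, noise_std=0.01,
                    steps=200, rng=rng)
```

Because the mirror map is entropic, each iterate stays on the simplex automatically, and every agent's iterate ends up close to the common optimum up to an error floor set by the noise level, which is the qualitative behavior the convergence bound in the abstract describes.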
Pages: 1179-1197
Page count: 19