Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions

Cited: 0
Authors
Yan, Xiaoxi [1 ]
Li, Cheng [1 ]
Lu, Kaihong [2 ]
Xu, Hang [2 ]
Affiliations
[1] Jiangsu Univ, Sch Elect & Informat Engn, Zhenjiang 212013, Jiangsu, Peoples R China
[2] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Shandong, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Multi-agent system; Online distributed optimization; Pseudoconvex optimization; Random gradient-free method; PSEUDOMONOTONE VARIATIONAL-INEQUALITIES; MIXED EQUILIBRIUM PROBLEMS; CONVEX-OPTIMIZATION; MULTIAGENT OPTIMIZATION; ALGORITHMS;
DOI
10.1007/s11768-023-00181-8
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper focuses on the online distributed optimization problem over multi-agent systems. In this problem, each agent can only access its own cost function and a convex constraint set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to information about the current cost functions until decisions are made. Unlike most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and the true gradients of the cost functions are unavailable. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent makes decisions using only gradient estimates rather than true gradient information. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows at a suitably bounded rate, then the expected dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
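To illustrate the core idea of gradient-free estimation described in the abstract, the sketch below shows a generic multi-point randomized smoothing estimator: the gradient is approximated by averaging finite differences of the cost along random Gaussian directions. This is a standard construction from the gradient-free optimization literature, not the paper's exact estimator; the function names, the choice of Gaussian directions, and the parameters `delta` and `num_dirs` are illustrative assumptions.

```python
import random

def gradient_estimate(f, x, delta=1e-4, num_dirs=4):
    """Multi-point random gradient-free estimator (generic smoothing sketch).

    Averages directional finite differences (f(x + delta*u) - f(x)) / delta
    along num_dirs random Gaussian directions u; in expectation this
    approximates the gradient of a smoothed version of f.
    """
    n = len(x)
    g = [0.0] * n
    fx = f(x)  # single function evaluation at the current point
    for _ in range(num_dirs):
        # Random Gaussian direction
        u = [random.gauss(0.0, 1.0) for _ in range(n)]
        # Forward finite difference along u
        x_pert = [xi + delta * ui for xi, ui in zip(x, u)]
        slope = (f(x_pert) - fx) / delta
        # Accumulate the directional estimate, averaged over num_dirs
        for i in range(n):
            g[i] += slope * u[i] / num_dirs
    return g

# Example: for f(x) = ||x||^2 the true gradient at x is 2x,
# so at x = (1, -2) the estimate should approach (2, -4).
random.seed(0)
f = lambda x: sum(xi * xi for xi in x)
est = gradient_estimate(f, [1.0, -2.0], num_dirs=2000)
```

In the online distributed setting of the paper, each agent would form such an estimate of its own time-varying cost at each round and combine it with a consensus step over the time-varying digraph; the variance of the estimator (controlled here by `num_dirs`) is what drives the analysis in expectation.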
Pages: 14-24 (11 pages)
Related Papers
Showing [21]-[30] of 50
  • [21] Distributed Randomized Gradient-Free Mirror Descent Algorithm for Constrained Optimization
    Yu, Zhan
    Ho, Daniel W. C.
    Yuan, Deming
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2022, 67 (02) : 957 - 964
  • [22] Gradient-free method for distributed multi-agent optimization via push-sum algorithms
    Yuan, Deming
    Xu, Shengyuan
    Lu, Junwei
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2015, 25 (10) : 1569 - 1580
  • [23] Distributed online bandit optimization under random quantization
    Yuan, Deming
    Zhang, Baoyong
    Ho, Daniel W. C.
    Zheng, Wei Xing
    Xu, Shengyuan
    AUTOMATICA, 2022, 146
  • [24] Gradient-free strategies to robust well control optimization
    Pinto, Jefferson Wellano Oliveira
    Tueros, Juan Alberto Rojas
    Horowitz, Bernardo
    da Silva, Silvana Maria Bastos Afonso
    Willmersdorf, Ramiro Brito
    de Oliveira, Diego Felipe Barbosa
    COMPUTATIONAL GEOSCIENCES, 2020, 24 (06) : 1959 - 1978
  • [25] Random Gradient-Free Optimization for Multiagent Systems With Communication Noises Under a Time-Varying Weight Balanced Digraph
    Wang, Dong
    Zhou, Jun
    Wang, Zehua
    Wang, Wei
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2020, 50 (01): : 281 - 289
  • [26] Randomized Gradient-Free Method for Multiagent Optimization Over Time-Varying Networks
    Yuan, Deming
    Ho, Daniel W. C.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26 (06) : 1342 - 1347
  • [27] Gossip-Based Gradient-Free Method for Multi-Agent Optimization: Constant Step Size Analysis
    Yuan, Deming
    2014 33RD CHINESE CONTROL CONFERENCE (CCC), 2014, : 1349 - 1353
  • [28] Distributed Randomized Gradient-Free Convex Optimization With Set Constraints Over Time-Varying Weight-Unbalanced Digraphs
    Zhu, Yanan
    Li, Qinghai
    Li, Tao
    Wen, Guanghui
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2025, 12 (02): : 610 - 622
  • [29] Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
    Lin, Tianyi
    Zheng, Zeyu
    Jordan, Michael I.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022
  • [30] Randomized Gradient-Free Distributed Algorithms through Sequential Gaussian Smoothing
    Chen, Xing-Min
    Gao, Chao
    Zhang, Ming-Kun
    Qin, Yi-Da
    PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017), 2017, : 8407 - 8412