Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions

Cited by: 0
Authors
Yan, Xiaoxi [1]
Li, Cheng [1]
Lu, Kaihong [2]
Xu, Hang [2]
Affiliations
[1] Jiangsu Univ, Sch Elect & Informat Engn, Zhenjiang 212013, Jiangsu, Peoples R China
[2] Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Shandong, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Multi-agent system; Online distributed optimization; Pseudoconvex optimization; Random gradient-free method; PSEUDOMONOTONE VARIATIONAL-INEQUALITIES; MIXED EQUILIBRIUM PROBLEMS; CONVEX-OPTIMIZATION; MULTIAGENT OPTIMIZATION; ALGORITHMS;
DOI
10.1007/s11768-023-00181-8
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline classification code
0812;
Abstract
This paper focuses on the online distributed optimization problem over multi-agent systems. In this problem, each agent can access only its own cost function and a convex constraint set, and can exchange local state information only with its current neighbors through a time-varying digraph. In addition, the agents have no access to information about the current cost functions until after their decisions are made. Unlike most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and true gradients of the cost functions are unavailable. To handle this problem, a random gradient-free online distributed algorithm based on a multi-point gradient estimator is proposed. Of particular interest is that, under the proposed algorithm, each agent makes decisions using only gradient estimates rather than true gradient information. Dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows at a suitably bounded rate, then the expected dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
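The abstract describes the algorithm only at a high level. As a rough illustration of the two ingredients it names, the sketch below combines a generic randomized multi-point gradient estimator with a projected online distributed update over a weighted communication graph. The toy quadratic costs, ring mixing matrix, step sizes, and smoothing parameter are illustrative assumptions, not taken from the paper; the authors' exact estimator, weight rules, and regret analysis are given in the paper itself.

```python
import numpy as np

def multipoint_grad_estimate(f, x, delta, num_dirs=4, rng=None):
    """Generic randomized multi-point gradient estimator (a sketch, not the
    paper's exact construction): average forward differences of f along
    random Gaussian directions, with smoothing radius delta."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + delta * u) - f(x)) / delta * u
    return g / num_dirs

def project_ball(x, radius=5.0):
    """Euclidean projection onto a ball, standing in for the convex constraint set."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else radius * x / norm

# Toy online distributed run: n agents face drifting quadratic costs
# (quadratics are strongly convex, hence strongly pseudoconvex) that are
# revealed only after each decision is made.
rng = np.random.default_rng(0)
n_agents, dim, horizon = 5, 2, 200
x = rng.standard_normal((n_agents, dim))           # initial decisions
for t in range(1, horizon + 1):
    # Row-stochastic mixing matrix for a ring digraph (kept static here;
    # the paper allows time-varying digraphs under connectivity conditions).
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i + 1) % n_agents] = 0.5
    targets = np.array([[np.cos(0.01 * t) + 0.1 * i, np.sin(0.01 * t)]
                        for i in range(n_agents)])  # drifting minimizers
    costs = [lambda z, c=c: float(np.sum((z - c) ** 2)) for c in targets]

    y = W @ x                                       # consensus (mixing) step
    eta, delta = 1.0 / np.sqrt(t), 1.0 / t          # diminishing parameters
    x = np.array([project_ball(y[i] - eta * multipoint_grad_estimate(costs[i], y[i], delta, rng=rng))
                  for i in range(n_agents)])
print("final decisions:\n", x)
```

The point of the estimator is that only function evaluations at randomly perturbed points are needed, so each agent can update its decision even when true gradients of the (pseudoconvex) costs are unavailable.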
Pages: 14-24 (11 pages)
Related papers (50 in total)
  • [41] Decentralized Online Strongly Convex Optimization with General Compressors and Random Disturbances
    Liu, Honglei
    Yuan, Deming
    Zhang, Baoyong
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2025, 204 (01)
  • [42] Gradient-Free Accelerated Event-Triggered Scheme for Constrained Network Optimization in Smart Grids
    Hu, Chuanhao
    Zhang, Xuan
    Wu, Qiuwei
    IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (03) : 2843 - 2855
  • [43] Approaching Quartic Convergence Rates for Quasi-Stochastic Approximation with Application to Gradient-Free Optimization
    Lauand, Caio Kalil
    Meyn, Sean
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [44] A Distributed Conjugate Gradient Online Learning Method over Networks
    Xu, Cuixia
    Zhu, Junlong
    Shang, Youlin
    Wu, Qingtao
    COMPLEXITY, 2020, 2020
  • [45] Online Distributed Stochastic Gradient Algorithm for Nonconvex Optimization With Compressed Communication
    Li, Jueyou
    Li, Chaojie
    Fan, Jing
    Huang, Tingwen
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (02) : 936 - 951
  • [46] Distributed Unconstrained Optimization with Time-varying Cost Functions
    Esteki, Amir-Salar
    Kia, Solmaz S.
    2023 EUROPEAN CONTROL CONFERENCE, ECC, 2023,
  • [47] Distributed Optimization of Strongly Convex Functions on Directed Time-Varying Graphs
    Nedic, Angelia
    Olshevsky, Alex
    2013 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2013, : 329 - 332
  • [48] An enhanced gradient-tracking bound for distributed online stochastic convex optimization
    Alghunaim, Sulaiman A.
    Yuan, Kun
    SIGNAL PROCESSING, 2024, 217
  • [49] Distributed Continuous-Time Optimization With Uncertain Time-Varying Quadratic Cost Functions
    Jiang, Liangze
    Wu, Zheng-Guang
    Wang, Lei
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2025, 55 (02): 1526 - 1536
  • [50] Analysis of a Two-Step Gradient Method with Two Momentum Parameters for Strongly Convex Unconstrained Optimization
    Krivovichev, Gerasim V.
    Sergeeva, Valentina Yu.
    ALGORITHMS, 2024, 17 (03)