A Flexible Stochastic Multi-Agent ADMM Method for Large-Scale Distributed Optimization

Times cited: 1
Authors
Wu, Lin [1 ,2 ]
Wang, Yongbin [1 ,2 ]
Shi, Tuo [3 ,4 ]
Affiliations
[1] Minist Educ, Key Lab Convergent Media & Intelligent Technol, Beijing 100024, Peoples R China
[2] Commun Univ China, Sch Comp & Cyberspace Secur, Beijing 100024, Peoples R China
[3] Beijing Police Coll, Beijing 102202, Peoples R China
[4] Inst Sci & Tech Informat China, Beijing 100038, Peoples R China
Keywords
Distributed optimization; ADMM; variance reduction; Hessian approximation; flexibility; ALTERNATING DIRECTION METHOD; CONVERGENCE;
DOI
10.1109/ACCESS.2021.3120017
CLC number
TP [Automation technology, computer technology];
Discipline code
0812
Abstract
While stochastic alternating direction method of multipliers (ADMM) methods have shown enormous potential in distributed applications, improving their algorithmic flexibility can bring substantial benefits. In this paper, we propose a novel stochastic optimization method based on the distributed ADMM framework, called Flex-SADMM. Specifically, we incorporate variance-reduced first-order information and approximated second-order information to solve the ADMM subproblem, targeting stable convergence and a more accurate search direction. Moreover, unlike most ADMM-based methods, which require every computation node to perform an update in each iteration, we only require each computation node to update within a bounded iteration interval, which significantly improves flexibility. We further provide theoretical results that guarantee the convergence of Flex-SADMM for nonconvex optimization problems. These results show that our proposed method successfully overcomes the above challenges while keeping the computational complexity low. In our empirical study, we have verified the effectiveness and the improved flexibility of the proposed method.
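The abstract's main ingredients (consensus ADMM, SVRG-style variance-reduced gradients for inexact subproblem solves, and nodes that update only within a bounded iteration interval) can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's algorithm: the name `flex_sadmm_sketch`, the least-squares local objectives, the skip schedule, and all parameter values are made up for demonstration, and the approximated second-order information used in Flex-SADMM is omitted.

```python
import numpy as np

def flex_sadmm_sketch(A_list, b_list, rho=1.0, outer=300, inner=5,
                      step=0.01, update_period=2, seed=0):
    """Toy consensus ADMM: each agent i holds f_i(x) = 0.5*||A_i x - b_i||^2.
    The x-subproblem is solved inexactly with SVRG-style variance-reduced
    gradient steps, and each agent updates only every `update_period`
    rounds (a crude stand-in for a bounded iteration interval)."""
    rng = np.random.default_rng(seed)
    n = A_list[0].shape[1]
    m = len(A_list)
    x = [np.zeros(n) for _ in range(m)]  # local primal variables
    y = [np.zeros(n) for _ in range(m)]  # dual variables
    z = np.zeros(n)                      # global consensus variable
    for t in range(outer):
        for i in range(m):
            if t > 0 and t % update_period != i % update_period:
                continue  # agent i skips this round (flexibility)
            A, b = A_list[i], b_list[i]
            rows = A.shape[0]
            x_snap = x[i].copy()
            full_grad = A.T @ (A @ x_snap - b)  # snapshot full gradient
            for _ in range(inner):
                j = rng.integers(rows)
                # unbiased SVRG estimator of the full gradient at x[i]
                g_cur = rows * A[j] * (A[j] @ x[i] - b[j])
                g_old = rows * A[j] * (A[j] @ x_snap - b[j])
                vr_grad = g_cur - g_old + full_grad
                # gradient of the augmented Lagrangian w.r.t. x_i
                grad = vr_grad + y[i] + rho * (x[i] - z)
                x[i] = x[i] - step * grad
        # z-update: average of shifted local variables
        z = np.mean([x[i] + y[i] / rho for i in range(m)], axis=0)
        for i in range(m):
            y[i] = y[i] + rho * (x[i] - z)  # dual ascent step
    return z
```

On a small consistent least-squares problem the consensus variable approaches the shared minimizer even though half the agent updates are skipped each round, which is the flexibility effect the abstract describes.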
Pages: 19045-19059
Page count: 15
Related papers
50 in total
  • [31] Distributed optimization with hybrid linear constraints for multi-agent networks
    Zheng, Yanling
    Liu, Qingshan
    Wang, Miao
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2022, 32 (04) : 2069 - 2083
  • [32] Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize
    Ren, Xiaoxing
    Li, Dewei
    Xi, Yugeng
    Shao, Haibin
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2021, 8 (08) : 1451 - 1464
  • [34] Distributed policy evaluation via inexact ADMM in multi-agent reinforcement learning
    Zhao, Xiaoxiao
    Yi, Peng
    Li, Li
    CONTROL THEORY AND TECHNOLOGY, 2020, 18 (04) : 362 - 378
  • [35] Dual decomposition for multi-agent distributed optimization with coupling constraints*
    Falsone, Alessandro
    Margellos, Kostas
    Garatti, Simone
    Prandini, Maria
    AUTOMATICA, 2017, 84 : 149 - 158
  • [36] Distributed optimization via multi-agent systems
    Wang L.
    Lu K.-H.
    Guan Y.-Q.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2019, 36 (11): : 1820 - 1833
  • [37] Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control
    Chu, Tianshu
    Wang, Jie
    Codeca, Lara
    Li, Zhaojian
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2020, 21 (03) : 1086 - 1095
  • [38] Distributed Newton Methods for Strictly Convex Consensus Optimization Problems in Multi-Agent Networks
    Wang, Dong
    Ren, Hualing
    Shao, Fubo
    SYMMETRY-BASEL, 2017, 9 (08):
  • [39] A Lagrange Multiplier Method for Distributed Optimization Based on Multi-Agent Network With Private and Shared Information
    Zhao, Yan
    Liu, Qingshan
    IEEE ACCESS, 2019, 7 : 83297 - 83305
  • [40] CPLNS: Cooperative Parallel Large Neighborhood Search for Large-Scale Multi-Agent Path Finding
    Chen, Kai
    Qu, Qingjun
    Zhu, Feng
    Yi, Zhengming
    Tang, Wenjie
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35 (11) : 2069 - 2086