MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods

Times Cited: 0
Authors
Wu, Mou [1 ,2 ]
Liao, Haibin [3 ]
Ding, Zhengtao [4 ]
Xiao, Yonggang [1 ,2 ]
Affiliations
[1] Hubei Univ Sci & Technol, Sch Comp Sci & Technol, Xianning 437100, Peoples R China
[2] Hubei Univ Sci & Technol, Lab Optoelect Informat & Intelligent Control, Xianning 437100, Peoples R China
[3] Wuhan Text Univ, Sch Elect & Elect Engn, Wuhan 430200, Peoples R China
[4] Univ Manchester, Dept Elect & Elect Engn, Manchester M13 9PL, England
Funding
National Natural Science Foundation of China;
Keywords
Convergence acceleration; distributed optimization; gradient descent; machine learning; multiple updates; CONVEX-OPTIMIZATION; ALGORITHMS; NETWORKS; EXTRA;
DOI
10.1109/TNNLS.2024.3376421
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Gradient-type distributed optimization methods have become one of the most important tools for solving minimization learning tasks over a networked agent system. However, performing only one gradient update per iteration makes it difficult to achieve a substantive acceleration of convergence. In this article, we propose an accelerated framework named multiupdates single-combination (MUSIC), which allows each agent to perform multiple local updates and a single combination in each iteration. More importantly, we equip both inexact and exact distributed optimization methods with this framework, thereby developing two new algorithms that exhibit accelerated linear convergence and high communication efficiency. Our rigorous convergence analysis reveals the sources of the steady-state errors arising from inexact policies and offers effective solutions. Numerical results on synthetic and real datasets validate our theoretical motivation and analysis and demonstrate the performance advantages.
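The mechanism the abstract describes (several cheap local gradient steps per iteration, followed by a single combination with neighbors) can be illustrated with a minimal Python sketch. The quadratic local losses, ring-graph mixing matrix, and all parameter values below are assumptions made for illustration; this is not the paper's exact MUSIC algorithm.

import numpy as np

# Illustrative sketch of the multiupdates single-combination (MUSIC) idea:
# each agent runs several local gradient steps, then communicates once per
# outer iteration. Losses, topology, and step sizes are assumed, not taken
# from the paper.

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Assumed local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = rng.standard_normal((n_agents, 10, dim))
b = rng.standard_normal((n_agents, 10))

def local_grad(i, x):
    """Gradient of the i-th agent's local loss at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix over a ring graph (assumed topology).
W = 0.5 * np.eye(n_agents)
for i in range(n_agents):
    W[i, (i - 1) % n_agents] += 0.25
    W[i, (i + 1) % n_agents] += 0.25

X = np.zeros((n_agents, dim))      # row i = agent i's local iterate
step, n_local = 0.01, 5            # n_local = 1 recovers one update per iteration

for _ in range(200):               # each outer iteration costs one communication
    for _ in range(n_local):       # multiple local (inexact) gradient updates
        for i in range(n_agents):
            X[i] -= step * local_grad(i, X[i])
    X = W @ X                      # single combination with neighbors

# With a constant step size, this plain inexact scheme settles in a
# neighborhood of the optimum: the steady-state error the abstract analyzes.
print("consensus spread:", np.ptp(X, axis=0))

Raising n_local trades inexpensive local computation for fewer communication rounds, which is the source of the acceleration and communication efficiency claimed in the abstract; the paper's exact variants are designed to remove the steady-state error that this plain inexact sketch exhibits.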
Pages: 1-15
Page Count: 15