Distributed Stochastic Optimization Under a General Variance Condition

Cited by: 1
Authors
Huang, Kun [1 ]
Li, Xiao [1 ]
Pu, Shi [1 ]
Affiliations
[1] Chinese Univ Hong Kong Shenzhen, Sch Data Sci, Shenzhen 518172, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Linear programming; Distributed databases; Gradient methods; Convergence; Complexity theory; Particle measurements; Distributed optimization; nonconvex optimization; stochastic optimization; LEARNING-BEHAVIOR; CONVERGENCE;
DOI
10.1109/TAC.2024.3393169
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Distributed stochastic optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems. Although numerous algorithms have been proposed and successfully applied to general practical problems, their theoretical guarantees mainly rely on certain boundedness conditions on the stochastic gradients, ranging from uniform boundedness to the relaxed growth condition. In addition, how to characterize the data heterogeneity among the agents and its impact on algorithmic performance remains challenging. Motivated by these issues, we revisit the classical federated averaging algorithm (McMahan et al., 2017) as well as the more recent SCAFFOLD method (Karimireddy et al., 2020) for solving the distributed stochastic optimization problem, and we establish convergence results under only a mild variance condition on the stochastic gradients for smooth nonconvex objective functions. Almost sure convergence to a stationary point is also established under the same condition. Moreover, we discuss a more informative measure of data heterogeneity as well as its implications.
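To make the federated averaging scheme named in the abstract concrete, below is a minimal toy sketch: each of n agents holds a local quadratic f_i(x) = 0.5 (x - b_i)^2 (so the global minimizer is the mean of the b_i), runs several local SGD steps from the current global iterate, and the server averages the local models. The objective, step sizes, and noise model here are illustrative assumptions, not the paper's exact setting or analysis.

```python
# Toy FedAvg sketch (hypothetical setup): n agents with local quadratics
# f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer is mean(b_i).
import random


def local_sgd(x, b_i, lr, num_steps, noise_std, rng):
    """Run num_steps of local SGD on f_i; stochastic gradient = (x - b_i) + noise."""
    for _ in range(num_steps):
        g = (x - b_i) + rng.gauss(0.0, noise_std)
        x -= lr * g
    return x


def fedavg(targets, rounds=200, local_steps=5, lr=0.1, noise_std=0.01, seed=0):
    rng = random.Random(seed)
    x = 0.0  # global model (a scalar, for simplicity)
    for _ in range(rounds):
        # each agent starts its local steps from the current global iterate
        local_models = [
            local_sgd(x, b, lr, local_steps, noise_std, rng) for b in targets
        ]
        x = sum(local_models) / len(local_models)  # server averages local models
    return x


b = [1.0, 2.0, 3.0, 6.0]  # heterogeneous local minimizers across agents
x_final = fedavg(b)  # should approach mean(b) = 3.0
```

For these quadratics the averaging step has fixed point exactly mean(b_i), so the iterate settles near 3.0 despite the heterogeneity of the b_i; with nonquadratic objectives, the gap between local minimizers is what the heterogeneity measures discussed in the paper quantify.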
Pages: 6105-6120
Page count: 16
Related Papers
50 records in total
  • [31] Adaptive Biased Stochastic Optimization
    Yang, Zhuang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (04) : 3067 - 3078
  • [32] Randomized Block Proximal Methods for Distributed Stochastic Big-Data Optimization
    Farina, Francesco
    Notarstefano, Giuseppe
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2021, 66 (09) : 4000 - 4014
  • [33] Distributed Optimization With Coupling Constraints
    Wu, Xuyang
    Wang, He
    Lu, Jie
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (03) : 1847 - 1854
  • [34] Online Stochastic Optimization of Networked Distributed Energy Resources
    Zhou, Xinyang
    Dall'Anese, Emiliano
    Chen, Lijun
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2020, 65 (06) : 2387 - 2401
  • [35] Nonparametric Compositional Stochastic Optimization for Risk-Sensitive Kernel Learning
    Bedi, Amrit Singh
    Koppel, Alec
    Rajawat, Ketan
    Sanyal, Panchajanya
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 428 - 442
  • [36] On the Influence of Bias-Correction on Distributed Stochastic Optimization
    Yuan, Kun
    Alghunaim, Sulaiman A.
    Ying, Bicheng
    Sayed, Ali H.
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 4352 - 4367
  • [37] Linear Convergence of ADMM Under Metric Subregularity for Distributed Optimization
    Pan, Xiaowei
    Liu, Zhongxin
    Chen, Zengqiang
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (04) : 2513 - 2520
  • [38] General inertial proximal stochastic variance reduction gradient for nonconvex nonsmooth optimization
    Sun, Shuya
    He, Lulu
    JOURNAL OF INEQUALITIES AND APPLICATIONS, 2023, 2023 (01)
  • [39] Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize
    Ren, Xiaoxing
    Li, Dewei
    Xi, Yugeng
    Shao, Haibin
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2021, 8 (08) : 1451 - 1464