Convergence analysis of gradient descent stochastic algorithms

Cited by: 54
Authors
Shapiro, A [1 ]
Wardi, Y [1 ]
Institution
[1] GEORGIA INST TECHNOL,SCH ELECT & COMP ENGN,ATLANTA,GA 30332
Keywords
gradient descent; subdifferentials; uniform laws of large numbers; infinitesimal perturbation analysis; discrete event dynamic systems;
DOI
10.1007/BF02190104
CLC Classification
C93 [Management Science]; O22 [Operations Research];
Subject Classification
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the case where the expected-value function is continuously differentiable; and the other, when that function is nondifferentiable but the sample performance functions are convex. The proofs are based on a version of the uniform law of large numbers which is provable for many discrete event systems where infinitesimal perturbation analysis is known to be strongly consistent.
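The algorithm described in the abstract can be illustrated with a minimal sketch: a sample-average gradient step whose sample size grows with the iteration count (increasing precision) and whose step size diminishes. The toy objective f(x, ξ) = (x − ξ)², the distribution ξ ~ N(2, 1), and all parameter values are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gradient(x, n):
    """Sample-average gradient of f(x, xi) = (x - xi)^2 over n draws.

    The distribution xi ~ N(2, 1) is an assumed toy example; the true
    minimizer of E[f(x, xi)] is E[xi] = 2.
    """
    xi = rng.normal(2.0, 1.0, size=n)
    return np.mean(2.0 * (x - xi))

def sgd_increasing_precision(x0, iters=200, step0=0.5):
    """Gradient descent on a sample performance function, with the
    sample size (precision) increasing at successive iterations."""
    x = x0
    for k in range(1, iters + 1):
        n_k = k                       # precision grows with the iteration
        g = sample_gradient(x, n_k)   # gradient of the sample function
        x -= (step0 / k) * g          # diminishing step size
    return x

x_star = sgd_increasing_precision(5.0)
# x_star converges toward the minimizer E[xi] = 2
```

The growing sample size plays the role of the uniform law of large numbers invoked in the proofs: the sample performance function and its gradient converge to their expected-value counterparts along the iteration sequence.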
Pages: 439 - 454
Page count: 16
Related Papers
50 records in total
  • [21] The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory
    Alistarh, Dan
    De Sa, Christopher
    Konstantinov, Nikola
    PODC'18: PROCEEDINGS OF THE 2018 ACM SYMPOSIUM ON PRINCIPLES OF DISTRIBUTED COMPUTING, 2018, : 169 - 177
  • [22] On the Convergence of Decentralized Stochastic Gradient Descent With Biased Gradients
    Jiang, Yiming
    Kang, Helei
    Liu, Jinlan
    Xu, Dongpo
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2025, 73 : 549 - 558
  • [23] CONVERGENCE OF RIEMANNIAN STOCHASTIC GRADIENT DESCENT ON HADAMARD MANIFOLD
    Sakai, Hiroyuki
    Iiduka, Hideaki
    PACIFIC JOURNAL OF OPTIMIZATION, 2024, 20 (04): : 743 - 767
  • [24] Convergence of Stochastic Gradient Descent in Deep Neural Network
    Zhou, Bai-cun
    Han, Cong-ying
    Guo, Tian-de
    ACTA MATHEMATICAE APPLICATAE SINICA-ENGLISH SERIES, 2021, 37 (01): : 126 - 136
  • [25] Convergence behavior of diffusion stochastic gradient descent algorithm
    Barani, Fatemeh
    Savadi, Abdorreza
    Yazdi, Hadi Sadoghi
    SIGNAL PROCESSING, 2021, 183
  • [26] Convergence of Momentum-Based Stochastic Gradient Descent
    Jin, Ruinan
    He, Xingkang
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION (ICCA), 2020, : 779 - 784
  • [27] On the Convergence of Stochastic Compositional Gradient Descent Ascent Method
    Gao, Hongchang
    Wang, Xiaoqian
    Luo, Lei
    Shi, Xinghua
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2389 - 2395
  • [28] Understanding and Detecting Convergence for Stochastic Gradient Descent with Momentum
    Chee, Jerry
    Li, Ping
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 133 - 140
  • [30] A CHARACTERIZATION OF STOCHASTIC MIRROR DESCENT ALGORITHMS AND THEIR CONVERGENCE PROPERTIES
    Azizan, Navid
    Hassibi, Babak
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 5167 - 5171