Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes

Cited: 0
Authors
Agazzi, Andrea [1]
Lu, Jianfeng [1,2,3]
Affiliations
[1] Duke Univ, Dept Math, Durham, NC 27708 USA
[2] Duke Univ, Dept Phys, Durham, NC 27708 USA
[3] Duke Univ, Dept Chem, Durham, NC 27708 USA
Source
MATHEMATICAL AND SCIENTIFIC MACHINE LEARNING, VOL 145, 2021
Keywords
Reinforcement learning; neural networks; temporal-difference learning; mean-field; lazy training; REINFORCEMENT; GO;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We discuss the approximation of the value function for infinite-horizon discounted Markov Reward Processes (MRPs) with wide neural networks trained by the Temporal-Difference (TD) learning algorithm. We first consider this problem under a certain scaling of the approximating function, leading to a regime called lazy training. In this regime, which arises naturally for an appropriate scaling of the network at initialization, the parameters of the model vary only slightly during learning, so the model behaves approximately linearly in its parameters. In the lazy training regime we prove exponential convergence of TD learning to local minimizers in the under-parametrized setting and to global minimizers in the over-parametrized setting. We then compare this scaling with the alternative mean-field scaling, in which the approximately linear behavior of the model is lost. In this nonlinear, mean-field regime we prove that all fixed points of the dynamics in parameter space are global minimizers. Finally, we illustrate our convergence results with examples of models that diverge when trained with non-lazy TD learning.
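A minimal sketch of the setting (not code from the paper): TD(0) value estimation on a toy MRP with a wide two-layer network whose output carries an explicit scaling factor alpha. Taking alpha of order sqrt(m), with m the width, gives the lazy (NTK-like) regime in which the trained parameters barely move from their initialization, while alpha = 1 corresponds to the mean-field normalization V(s) = (1/m) * sum_i a_i * sigma(w_i . x_s). The toy MRP, constants, and variable names below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the paper): TD(0) value estimation on a toy MRP with
# a wide two-layer network, contrasting the output scalings discussed above.
# The toy MRP, constants, and variable names are illustrative assumptions.
rng = np.random.default_rng(0)

n_states, gamma = 5, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)   # row-stochastic transition matrix
r = rng.standard_normal(n_states)                      # per-state rewards

m = 2048                                               # network width
W = rng.standard_normal((m, n_states))                 # hidden weights (kept fixed here)
a0 = rng.standard_normal(m)                            # output weights at initialization
a = a0.copy()

# Output scaling: alpha = sqrt(m) gives the lazy (NTK-like) regime,
# alpha = 1 gives the mean-field normalization V(s) = (1/m) * sum_i a_i * phi_i(s).
alpha = np.sqrt(m)

def phi(s):
    x = np.zeros(n_states)
    x[s] = 1.0                                         # one-hot state encoding
    return np.tanh(W @ x)

def V(s):
    return (alpha / m) * a @ phi(s)                    # scaled two-layer network output

eta, s = 0.05, 0
for _ in range(50_000):
    s_next = rng.choice(n_states, p=P[s])
    delta = r[s] + gamma * V(s_next) - V(s)            # TD error
    a += eta * delta * (alpha / m) * phi(s)            # semi-gradient TD(0) update
    s = s_next

# In the lazy regime the value estimates converge while the parameters
# drift only a vanishing relative distance from their initialization.
print("relative parameter drift:", np.linalg.norm(a - a0) / np.linalg.norm(a0))
```

Switching alpha to 1 (and retuning eta) recovers the mean-field normalization, where the parameters must travel an order-one distance and the model is genuinely nonlinear in its parameters.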
Pages: 37-74 (38 pages)