Differential TD Learning for Value Function Approximation

Times Cited: 0
Authors
Devraj, Adithya M. [1 ]
Meyn, Sean P. [1 ]
Affiliations
[1] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
Source
2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC) | 2016
Funding
US National Science Foundation;
Keywords
Reinforcement learning; Approximate dynamic programming; Poisson's equation; Stochastic optimal control;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Value functions arise both as components of algorithms and as performance metrics in statistics and engineering applications. Solving the associated Bellman equations is numerically challenging in all but a few special cases. A popular approximation technique is known as Temporal Difference (TD) learning. The algorithm introduced in this paper is intended to resolve two well-known problems with this approach: first, in the discounted-cost setting, the variance of the algorithm diverges as the discount factor approaches unity; second, in the average-cost setting, unbiased algorithms exist only in special cases. It is shown that the gradient of each of these value functions admits a representation that lends itself to algorithm design. Based on this result, the new differential TD method is obtained for Markovian models on Euclidean space with smooth dynamics. Numerical examples show remarkable improvements in performance. In an application to speed scaling, variance is reduced by two orders of magnitude.
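To make the variance issue concrete, the sketch below implements standard TD(0) with linear function approximation for a discounted-cost Markov chain, i.e., the baseline whose estimates fluctuate more and more as the discount factor approaches unity. The differential TD method proposed in the paper instead estimates the gradient of the value function, and its update rule is not reproduced in this record; the feature map, step-size schedule, and toy random-walk chain below are illustrative assumptions only, not the authors' algorithm.

```python
import numpy as np

def td0_linear(sample_transition, phi, dim, gamma=0.99,
               n_steps=100_000, step=lambda n: 1.0 / (1.0 + n / 100.0)):
    """Estimate theta so that V(x) ~= phi(x) @ theta for the discounted cost.

    sample_transition(x) -> (cost, x_next): one step of the Markov chain.
    phi(x)               -> feature vector of length `dim`.
    """
    theta = np.zeros(dim)
    x = 0                                    # arbitrary initial state
    for n in range(n_steps):
        c, x_next = sample_transition(x)
        # TD error: observed cost plus discounted next-state estimate,
        # minus the current estimate at the visited state.
        d = c + gamma * phi(x_next) @ theta - phi(x) @ theta
        theta += step(n) * d * phi(x)        # stochastic-approximation update
        x = x_next
    return theta

# Toy example (hypothetical): 5-state random walk on a ring with quadratic cost.
rng = np.random.default_rng(0)
N = 5

def sample_transition(x):
    x_next = (x + rng.choice([-1, 1])) % N
    return float(x ** 2), x_next

def phi(x):
    e = np.zeros(N)
    e[x] = 1.0                               # one-hot (tabular) features
    return e

theta = td0_linear(sample_transition, phi, dim=N, gamma=0.99)
print("estimated discounted values:", theta)
```

Running this sketch with gamma pushed closer to 1 makes the TD errors, and hence the fluctuations of theta, markedly larger; this is the variance growth that the paper's differential approach is designed to mitigate.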
Pages: 6347-6354
Page count: 8
Related Papers
50 records in total
  • [1] Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation
    Foster, Dylan J.
    Krishnamurthy, Akshay
    Simchi-Levi, David
    Xu, Yunzong
    CONFERENCE ON LEARNING THEORY, VOL 178, 2022, 178
  • [2] A Simple Finite-Time Analysis of TD Learning With Linear Function Approximation
    Mitra, Aritra
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2025, 70 (02) : 1388 - 1394
  • [3] Adaptive resolution function approximation for TD-learning: Simple division and integration
    Kobayashi, Y
    Hosoe, S
    SICE 2003 ANNUAL CONFERENCE, VOLS 1-3, 2003 : 2016 - 2021
  • [4] A grey approximation approach to state value function in reinforcement learning
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Lee, Guar-Yuan
    2007 IEEE INTERNATIONAL CONFERENCE ON INTEGRATION TECHNOLOGY, PROCEEDINGS, 2007 : 379 - +
  • [5] Distributed Value Function Approximation for Collaborative Multiagent Reinforcement Learning
    Stankovic, Milos S.
    Beko, Marko
    Stankovic, Srdjan S.
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2021, 8 (03) : 1270 - 1280
  • [6] A Clustering-Based Graph Laplacian Framework for Value Function Approximation in Reinforcement Learning
    Xu, Xin
    Huang, Zhenhua
    Graves, Daniel
    Pedrycz, Witold
    IEEE TRANSACTIONS ON CYBERNETICS, 2014, 44 (12) : 2613 - 2625
  • [7] Efficient exploration through active learning for value function approximation in reinforcement learning
    Akiyama, Takayuki
    Hachiya, Hirotaka
    Sugiyama, Masashi
    NEURAL NETWORKS, 2010, 23 (05) : 639 - 648
  • [8] ON CONVERGENCE RATE OF ADAPTIVE MULTISCALE VALUE FUNCTION APPROXIMATION FOR REINFORCEMENT LEARNING
    Li, Tao
    Zhu, Quanyan
    2019 IEEE 29TH INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2019
  • [9] The Divergence of Reinforcement Learning Algorithms with Value-Iteration and Function Approximation
    Fairbank, Michael
    Alonso, Eduardo
    2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012,
  • [10] Pseudorehearsal in Value Function Approximation
    Marochko, Vladimir
    Johard, Leonard
    Mazzara, Manuel
    AGENT AND MULTI-AGENT SYSTEMS: TECHNOLOGY AND APPLICATIONS, 2018, 74 : 178 - 189