Temporal-difference emphasis learning with regularized correction for off-policy evaluation and control

Cited by: 0
Authors
Cao, Jiaqing [1 ]
Liu, Quan [1 ]
Wu, Lan [1 ]
Fu, Qiming [2 ]
Zhong, Shan [3 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
[2] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Peoples R China
[3] Changshu Inst Technol, Sch Comp Sci & Engn, Changshu 215500, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Off-policy learning; Emphatic approach; Gradient temporal-difference learning; Gradient emphasis learning;
DOI
10.1007/s10489-023-04579-4
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Off-policy learning, where the goal is to learn about a policy of interest while following a different behavior policy, constitutes an important class of reinforcement learning problems. Emphatic temporal-difference (TD) learning is a pioneering off-policy method of this kind that reweights updates using the followon trace. The gradient emphasis learning (GEM) algorithm was recently proposed, from the perspective of stochastic approximation, to fix the unbounded variance and large emphasis-approximation error introduced by the followon trace; however, it is limited to a single gradient-TD2-style update and does not consider the update rules of the other gradient-TD (GTD) algorithms. How to better learn the emphasis for off-policy learning therefore remains an open question. In this paper, we rethink GEM and introduce a novel two-time-scale algorithm, TD emphasis learning with gradient correction (TDEC), to learn the true emphasis. We further regularize the update of TDEC's secondary learner, obtaining our final algorithm, TD emphasis learning with regularized correction (TDERC). We then apply the emphasis estimated by the proposed algorithms to the value-estimation gradient and to the policy gradient, yielding emphatic TD variants for off-policy evaluation and actor-critic algorithms for off-policy control, respectively. Finally, we empirically demonstrate the advantage of the proposed algorithms on a small domain as well as on challenging MuJoCo robot simulation tasks. Taken together, we hope that this work provides new insights toward a better alternative within the family of off-policy emphatic algorithms.
Pages: 20917-20937
Number of pages: 21
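
The abstract describes the overall structure of the method (a two-time-scale, gradient-corrected TD update whose secondary learner is regularized) but not its concrete update rules. The sketch below is only a rough illustration of that general structure, written as a standard off-policy TDC-style update for linear value estimation with a ridge penalty on the secondary weights; it is not the authors' TDEC/TDERC emphasis updates, and the step sizes alpha and beta, the penalty strength reg, and the linear features phi are all assumptions introduced for illustration.

import numpy as np

def tdc_regularized_step(w, h, phi, phi_next, reward, rho,
                         gamma=0.99, alpha=0.01, beta=0.1, reg=1.0):
    # One off-policy, TDC-style two-time-scale update with an L2-regularized
    # secondary learner. All names and constants are illustrative only.
    #   w        : primary weights of a linear estimate v(s) ~ w . phi(s)
    #   h        : secondary weights tracking the expected TD error
    #   rho      : importance-sampling ratio pi(a|s) / mu(a|s)
    #   reg      : strength of the ridge penalty on h (the "regularized correction")
    delta = reward + gamma * np.dot(w, phi_next) - np.dot(w, phi)  # TD error

    # Slow time scale: gradient-corrected TD update (as in TDC / GTD(0)).
    w = w + alpha * rho * (delta * phi - gamma * np.dot(h, phi) * phi_next)

    # Fast time scale: track the TD error in expectation, with the extra
    # -reg * h term shrinking the secondary weights toward zero.
    h = h + beta * (rho * (delta - np.dot(h, phi)) * phi - reg * h)
    return w, h

# Toy usage with random features, purely to show the call pattern.
d = 8
rng = np.random.default_rng(0)
w, h = np.zeros(d), np.zeros(d)
for _ in range(100):
    w, h = tdc_regularized_step(w, h, rng.random(d), rng.random(d),
                                reward=rng.random(), rho=0.8)

In TDEC/TDERC the two coupled learners would instead target the emphasis rather than the value function; the sketch is meant only to show the two-time-scale shape and where a regularizer on the secondary learner enters.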