A Convergent Off-Policy Temporal Difference Algorithm

Cited by: 1
Authors
Diddigi, Raghuram Bharadwaj [1 ]
Kamanchi, Chandramouli [1 ]
Bhatnagar, Shalabh [1 ]
Affiliations
[1] Indian Inst Sci IISc, Dept Comp Sci & Automat CSA, Bangalore, Karnataka, India
Source
ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2020 / Vol. 325
Keywords
DOI
10.3233/FAIA200207
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning the value function of a given policy (the target policy) from data samples obtained under a different policy (the behavior policy) is an important problem in Reinforcement Learning (RL), studied under the setting of off-policy prediction. Temporal Difference (TD) learning algorithms are a popular class of algorithms for solving the prediction problem. TD algorithms with linear function approximation have been shown to be convergent when the samples are generated from the target policy (known as on-policy prediction). However, it is well established in the literature that off-policy TD algorithms with linear function approximation may diverge. In this work, we propose a convergent online off-policy TD algorithm under linear function approximation. The main idea is to penalize the updates of the algorithm so as to ensure convergence of the iterates. We provide a convergence analysis of our algorithm and, through numerical evaluations, further demonstrate its effectiveness.
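The abstract describes the idea only at a high level: an online off-policy TD update with linear function approximation whose parameter updates are penalized so that the iterates remain convergent. The sketch below is a minimal illustration of that setting under stated assumptions, not the paper's algorithm: it uses standard importance-sampling-corrected TD(0) and a simple ridge-style shrinkage term as the penalty, and the names `phi`, `rho`, `eta`, and the transition interface are all hypothetical.

```python
import numpy as np

def penalized_off_policy_td(
    transitions,  # iterable of (s, a, r, s_next) sampled under the behavior policy
    phi,          # feature map: state -> np.ndarray of shape (d,)
    rho,          # importance-sampling ratio: (s, a) -> pi(a|s) / mu(a|s)
    d,            # feature dimension
    gamma=0.99,   # discount factor
    alpha=0.01,   # step size
    eta=0.1,      # penalty coefficient (illustrative choice, not from the paper)
):
    """Off-policy TD(0) with linear function approximation and a penalized update.

    A minimal sketch: the penalty is a ridge-style term that shrinks theta at
    every step, one simple way to keep the iterates bounded. The paper's exact
    penalization may differ.
    """
    theta = np.zeros(d)
    for (s, a, r, s_next) in transitions:
        phi_s, phi_next = phi(s), phi(s_next)
        # TD error for the target policy, corrected by the importance-sampling ratio.
        delta = r + gamma * np.dot(theta, phi_next) - np.dot(theta, phi_s)
        td_update = rho(s, a) * delta * phi_s
        # Penalized update: the extra -eta * theta term damps the iterates.
        theta += alpha * (td_update - eta * theta)
    return theta
```

With `eta = 0` this reduces to ordinary off-policy TD(0), which can diverge (e.g., on Baird's counterexample); the shrinkage term shown here is only one way to damp the updates, whereas the paper's actual penalization and its convergence guarantee are developed in the full text.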
Pages: 1103-1110
Page count: 8