Risk-sensitive reinforcement learning algorithms with generalized average criterion

Cited by: 0
Authors
Yin Changming (殷苌茗) [1]
Wang Hanxing (王汉兴) [2]
Zhao Fei (赵飞) [2]
Affiliations
[1] College of Computer and Communication Engineering, Changsha University of Science and Technology
[2] College of Sciences, Shanghai University
Keywords
reinforcement learning; risk-sensitive; generalized average; algorithm; convergence
DOI
None available
CLC number
TP181 [automated reasoning; machine learning]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A new algorithm is proposed that deliberately sacrifices some optimality of the control policy in order to gain robustness of the solution. Robustness may become an essential property of a learning system when the theoretical model does not match the physical system, when the system is non-stationary, or when the availability of control actions varies over time. The main contribution is a set of approximation algorithms together with their convergence results. A generalized average operator is applied in place of the usual optimal operator max (or min) in a class of important learning and dynamic programming algorithms, and their convergence is analyzed from a theoretical point of view. The aim of this research is to improve the robustness of reinforcement learning algorithms on a theoretical basis.
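The abstract does not spell out the paper's exact generalized average operator, so the following is only an illustrative sketch: it uses a log-average-exp (Boltzmann-style) operator, a common smooth generalized average that interpolates between the mean and the max, inside a standard Q-learning backup. All names (`generalized_average`, `q_learning_step`) and the parameter `beta` are assumptions introduced here, not from the paper.

```python
import numpy as np

def generalized_average(q_values, beta=5.0):
    """Log-average-exp operator: a smooth generalized average (assumption,
    standing in for the paper's unspecified operator). It always lies
    between min(q_values) and max(q_values); as beta -> infinity it
    approaches max, trading optimality for smoother value estimates."""
    m = np.max(q_values)  # subtract the max for numerical stability
    return m + np.log(np.mean(np.exp(beta * (q_values - m)))) / beta

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9, beta=5.0):
    """One Q-learning backup with the generalized average replacing max."""
    target = r + gamma * generalized_average(Q[s_next], beta)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

Because the operator is bounded by the min and max of its arguments, the corresponding Bellman backup remains a contraction, which is the kind of property that underlies the convergence results the abstract mentions.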
Pages: 405-416 (12 pages)
References
3 in total
[1] Peng J., Williams R. J. Incremental multi-step Q-learning [J]. Machine Learning, 1996(1).
[2] Watkins C. J. C. H., Dayan P. Technical Note: Q-Learning [J]. Machine Learning, 1992(3).
[3] Sutton R. S. Learning to predict by the methods of temporal differences [J]. Machine Learning, 1988(1).