Risk-sensitive reinforcement learning applied to control under constraints

Cited by: 174
Authors
Geibel, P. [1]
Wysotzki, F. [2]
Affiliations
[1] Univ Osnabruck, AI Grp, Inst Cognit Sci, D-4500 Osnabruck, Germany
[2] TU Berlin, AI Grp, Fac Elect Engn & Comp Sci, Berlin, Germany
Keywords
DOI
10.1613/jair.1666
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are states that are undesirable or dangerous to enter. We define the risk of a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model-free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution for the constrained problem that performs well with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model in order to obtain an optimal solution. The power of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed.
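The weighted two-criteria idea described in the abstract can be illustrated with a short sketch: learn an ordinary action-value function and, separately, a risk value estimating the probability of eventually entering an error state; act greedily on the weighted combination xi * value - risk; and adapt the weight xi so the estimated risk stays below the user-specified threshold. The toy chain MDP, the update rules, and all names below (xi, omega, risk_q, greedy) are illustrative assumptions, not the authors' exact algorithm from the paper.

import random

# Toy chain MDP (illustrative only): states 0..4. State 0 is an absorbing
# error state, state 4 the goal. Action 0 is a slow but safe step; action 1
# is a fast step that sometimes slips into the error state.
N_STATES, N_ACTIONS = 5, 2
ERROR_STATE, GOAL_STATE = 0, 4

def step(s, a):
    """Return (next_state, reward, episode_done) for the toy MDP."""
    if a == 0:                       # safe but slow
        s2 = min(s + 1, GOAL_STATE) if random.random() < 0.7 else s
    else:                            # fast but risky
        s2 = min(s + 2, GOAL_STATE) if random.random() < 0.8 else ERROR_STATE
    if s2 == GOAL_STATE:
        return s2, 1.0, True
    if s2 == ERROR_STATE:
        return s2, 0.0, True         # entering the error state ends the episode
    return s2, 0.0, False

def greedy(s, q, risk_q, xi):
    """Action maximizing the weighted combination xi * value - risk."""
    return max(range(N_ACTIONS), key=lambda a: xi * q[s][a] - risk_q[s][a])

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]       # value criterion
risk_q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # risk criterion
xi, omega = 1.0, 0.1        # weight on the value criterion, risk threshold
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(20000):
    s = 1 + random.randrange(3)      # start in a non-terminal state 1..3
    done = False
    while not done:
        a = random.randrange(N_ACTIONS) if random.random() < eps else greedy(s, q, risk_q, xi)
        s2, r, done = step(s, a)
        hit_error = float(done and s2 == ERROR_STATE)

        # value criterion: standard Q-learning target
        v_next = 0.0 if done else max(q[s2])
        q[s][a] += alpha * (r + gamma * v_next - q[s][a])

        # risk criterion: undiscounted probability of reaching the error
        # state under the current greedy (weighted) policy
        r_next = 0.0 if done else risk_q[s2][greedy(s2, q, risk_q, xi)]
        risk_q[s][a] += alpha * (hit_error + r_next - risk_q[s][a])
        s = s2

    # adapt the weight: shrink xi while the greedy policy looks infeasible,
    # let it grow again once the estimated risk is below the threshold omega
    start_risk = risk_q[1][greedy(1, q, risk_q, xi)]
    xi = xi * 0.999 if start_risk > omega else min(xi * 1.001, 10.0)

print("final weight xi =", round(xi, 3))
for s in range(1, GOAL_STATE):
    a = greedy(s, q, risk_q, xi)
    print(f"state {s}: action {a}, value {q[s][a]:.2f}, risk {risk_q[s][a]:.2f}")

The paper's actual algorithm differs in detail; the sketch only conveys how the two criterion estimates and the adaptively weighted action selection interact.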
Pages: 81-108
Page count: 28