Recursive learning automata approach to Markov decision processes

Cited by: 6
Authors
Chang, Hyeong Soo [1]
Fu, Michael C.
Hu, Jiaqiao
Marcus, Steven I.
Affiliations
[1] Sogang Univ, Dept Comp Sci & Engn, Seoul 121742, South Korea
[2] Univ Maryland, Robert H Smith Sch Business, College Pk, MD 20742 USA
[3] Univ Maryland, Syst Res Inst, College Pk, MD 20742 USA
[4] SUNY Stony Brook, Dept Appl Math & Stat, Stony Brook, NY 11794 USA
[5] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
Keywords
learning automata; Markov decision process (MDP); sampling
DOI
10.1109/TAC.2007.900859
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
In this note, we present a sampling algorithm, called the recursive automata sampling algorithm (RASA), for the control of finite-horizon Markov decision processes (MDPs). By recursively extending Sastry's learning automata pursuit algorithm, originally designed for solving nonsequential stochastic optimization problems, RASA returns an estimate of both the optimal action from a given state and the corresponding optimal value. Building on the finite-time analysis of the pursuit algorithm by Rajaraman and Sastry, we analyze the finite-time behavior of RASA. Specifically, for a given initial state, we derive the following probability bounds as a function of the number of samples: 1) a lower bound on the probability that RASA samples the optimal action and 2) an upper bound on the probability that the deviation between the true optimal value and the RASA estimate exceeds a given error.
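To make the recursion described in the abstract concrete, the following minimal Python sketch illustrates a pursuit-style recursive sampling scheme in the spirit of RASA. It is an illustration under assumptions, not the authors' pseudocode: the one-step simulator `simulate(x, a)`, the sample budget `n_samples`, and the pursuit step size `mu` are all hypothetical placeholders.

```python
import random

def rasa(x, stage, horizon, actions, simulate, n_samples=64, mu=0.05):
    """Sketch of a recursive pursuit-style sampling scheme (illustrative,
    not the paper's pseudocode). `simulate(x, a)` is an assumed one-step
    simulator returning (reward, next_state)."""
    if stage == horizon:              # terminal stage: no action, zero value
        return None, 0.0

    k = len(actions)
    p = [1.0 / k] * k                 # automaton's action-probability vector
    q_hat = [0.0] * k                 # running-mean Q-value estimates
    q_cnt = [0] * k

    def sample_q(i):
        # One sampled Q-value: immediate reward plus a recursively
        # estimated optimal value-to-go from the sampled next state.
        r, x_next = simulate(x, actions[i])
        _, v_next = rasa(x_next, stage + 1, horizon, actions,
                         simulate, max(1, n_samples // 2), mu)
        q_cnt[i] += 1
        q_hat[i] += (r + v_next - q_hat[i]) / q_cnt[i]

    for i in range(k):                # seed every estimate with one sample
        sample_q(i)
    for _ in range(n_samples):
        i = random.choices(range(k), weights=p)[0]
        sample_q(i)
        best = max(range(k), key=q_hat.__getitem__)
        # Pursuit update: shift probability mass toward the greedy action.
        p = [(1 - mu) * pj + (mu if j == best else 0.0)
             for j, pj in enumerate(p)]

    best = max(range(k), key=q_hat.__getitem__)
    return actions[best], q_hat[best]


# Toy usage: trivial state, two noisy actions, horizon 2; the returned
# value estimate should be near 2.0.
def sim(x, a):
    return random.gauss(1.0 if a == "good" else 0.5, 0.1), x

print(rasa(0, 0, 2, ["good", "bad"], sim))
```

The pursuit update keeps `p` a probability vector while steering it toward the action that currently looks best; the abstract's probability bounds concern how quickly such a scheme concentrates on the optimal action as the number of samples grows.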
Pages: 1349-1355
Number of pages: 7
Related papers
18 items in total
[1] Ahamed, T. P. I., 2002, Electric Power Systems Research, 63:9, DOI 10.1016/S0378-7796(02)00088-3.
[2] [Anonymous], Thesis, Indian Institute of Science.
[3] [Anonymous], 2007, Simulation-Based Algorithms for Markov Decision Processes.
[4] Baxter, J. and Bartlett, P. L., "Infinite-horizon policy-gradient estimation," Journal of Artificial Intelligence Research, 2001, 15:319-350.
[5] Bertsekas, D., 1996, Neuro-Dynamic Programming, 1st ed.
[6] Chang, H. S., Fu, M. C., Hu, J., and Marcus, S. I., "An adaptive sampling algorithm for solving Markov decision processes," Operations Research, 2005, 53(1):126-139.
[7] Chang, H. S., Fu, M. C., Hu, J., and Marcus, S. I., "An asymptotically efficient simulation-based algorithm for finite horizon stochastic dynamic programming," IEEE Transactions on Automatic Control, 2007, 52(1):89-94.
[8] Hernandez-Lerma, O., 1989, Adaptive Markov Control Processes.
[9] Jain, R. and Varaiya, P. P., "Simulation-based uniform value function estimates of Markov decision processes," SIAM Journal on Control and Optimization, 2006, 45(5):1633-1656.
[10] Poznyak, A. S., 2000, Self-Learning Control of Finite Markov Chains.