Cross-Entropy Optimization of Control Policies With Adaptive Basis Functions

Cited by: 49
Authors
Busoniu, Lucian [1 ]
Ernst, Damien [2 ,3 ]
De Schutter, Bart [1 ,4 ]
Babuska, Robert [1 ]
Affiliations
[1] Delft Univ Technol, Delft Ctr Syst & Control, NL-2628 CD Delft, Netherlands
[2] Belgian Natl Fund Sci Res FRS FNRS, B-1000 Brussels, Belgium
[3] Univ Liege, Syst & Modeling Res Unit, B-4000 Liege, Belgium
[4] Delft Univ Technol, Marine & Transport Technol Dept, NL-2628 CD Delft, Netherlands
Source
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | 2011, Vol. 41, No. 01
Keywords
Adaptive basis functions; cross-entropy optimization; direct policy search; Markov decision processes; GRADIENT METHODS; REINFORCEMENT
DOI
10.1109/TSMCB.2010.2050586
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper introduces an algorithm for the direct search of control policies in continuous-state, discrete-action Markov decision processes. The algorithm looks for the best closed-loop policy that can be represented using a given number of basis functions (BFs), where a discrete action is assigned to each BF. The type and number of BFs are specified in advance and determine the complexity of the representation. Considerable flexibility is achieved by optimizing the locations and shapes of the BFs together with the action assignments. The optimization is carried out with the cross-entropy method and evaluates the policies by their empirical return from a representative set of initial states. The return for each representative state is estimated using Monte Carlo simulations. The resulting algorithm for cross-entropy policy search with adaptive BFs is extensively evaluated on problems with two to six state variables, for which it reliably obtains good policies with only a small number of BFs. In these experiments, cross-entropy policy search requires vastly fewer BFs than value-function techniques with equidistant BFs, and it outperforms policy search with a competing optimization algorithm called DIRECT.
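The following is a minimal, self-contained sketch of the kind of procedure the abstract describes: cross-entropy (CE) optimization over the centers, widths, and discrete action assignments of radial BFs, with candidate policies scored by Monte Carlo returns from a representative set of initial states. The toy 1-D double-integrator problem, the max-activation action-assignment rule, and all names and parameter values (ce_policy_search, alpha, the population size, etc.) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem (a stand-in for the paper's benchmarks, NOT taken from it):
# 1-D double integrator with state x = (position, velocity) and discrete
# acceleration actions {-1, 0, +1}.
ACTIONS = np.array([-1.0, 0.0, 1.0])
GAMMA, HORIZON, DT = 0.95, 100, 0.1

def step(x, u):
    pos, vel = x
    x_next = np.array([pos + DT * vel, vel + DT * u])
    reward = -(pos ** 2 + 0.1 * vel ** 2)  # quadratic cost as negative reward
    return x_next, reward

# Policy: n_bf radial BFs with centers C (n_bf x 2), widths W (n_bf x 2), and
# one discrete action index per BF; the action of the most strongly activated
# BF is applied (one plausible assignment rule).
def act(x, C, W, A):
    phi = np.exp(-np.sum(((x - C) / W) ** 2, axis=1))
    return ACTIONS[A[np.argmax(phi)]]

def mc_return(C, W, A, x0):
    """Discounted return of the policy from x0 (deterministic dynamics here)."""
    x, ret = x0, 0.0
    for k in range(HORIZON):
        x, r = step(x, act(x, C, W, A))
        ret += GAMMA ** k * r
    return ret

def score(C, W, A, X0):
    """Average empirical return over a representative set of initial states."""
    return np.mean([mc_return(C, W, A, x0) for x0 in X0])

def ce_policy_search(n_bf=4, pop=50, elite_frac=0.1, iters=30, alpha=0.7):
    X0 = [np.array(x0, dtype=float) for x0 in [(-1, 0), (1, 0), (0, 1), (0, -1)]]
    # Gaussian sampling distributions for centers/widths, categorical for actions.
    mu_c, sig_c = np.zeros((n_bf, 2)), np.full((n_bf, 2), 1.0)
    mu_w, sig_w = np.full((n_bf, 2), 0.5), np.full((n_bf, 2), 0.5)
    p_a = np.full((n_bf, len(ACTIONS)), 1.0 / len(ACTIONS))
    n_elite = max(1, int(elite_frac * pop))
    best = None
    for _ in range(iters):
        samples = []
        for _ in range(pop):
            C = rng.normal(mu_c, sig_c)
            W = np.abs(rng.normal(mu_w, sig_w)) + 1e-3  # keep widths positive
            A = np.array([rng.choice(len(ACTIONS), p=p) for p in p_a])
            samples.append((score(C, W, A, X0), C, W, A))
        samples.sort(key=lambda s: -s[0])  # stable sort on the score only
        elite = samples[:n_elite]
        if best is None or elite[0][0] > best[0]:
            best = elite[0]
        # Smoothed CE update of all sampling distributions from the elites.
        Ce = np.stack([s[1] for s in elite]); We = np.stack([s[2] for s in elite])
        mu_c = alpha * Ce.mean(0) + (1 - alpha) * mu_c
        sig_c = alpha * Ce.std(0) + (1 - alpha) * sig_c + 1e-3
        mu_w = alpha * We.mean(0) + (1 - alpha) * mu_w
        sig_w = alpha * We.std(0) + (1 - alpha) * sig_w + 1e-3
        for i in range(n_bf):
            counts = np.bincount([s[3][i] for s in elite], minlength=len(ACTIONS))
            p_a[i] = alpha * counts / n_elite + (1 - alpha) * p_a[i]
    return best

best_score, C, W, A = ce_policy_search()
print(f"best average return over representative states: {best_score:.3f}")

The CE update shown is the standard one: refit Gaussian sampling distributions over the continuous BF parameters and categorical distributions over the per-BF action indices from the elite samples, smoothing both with a factor alpha to avoid premature convergence.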
Pages: 196-209
Page count: 14