Cross-Entropy Optimization of Control Policies With Adaptive Basis Functions

Cited by: 50
Authors
Busoniu, Lucian [1]
Ernst, Damien [2,3]
De Schutter, Bart [1,4]
Babuska, Robert [1]
Affiliations
[1] Delft Univ Technol, Delft Ctr Syst & Control, NL-2628 CD Delft, Netherlands
[2] Belgian Natl Fund Sci Res FRS FNRS, B-1000 Brussels, Belgium
[3] Univ Liege, Syst & Modeling Res Unit, B-4000 Liege, Belgium
[4] Delft Univ Technol, Marine & Transport Technol Dept, NL-2628 CD Delft, Netherlands
Source
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | 2011, Vol. 41, No. 01
Keywords
Adaptive basis functions; cross-entropy optimization; direct policy search; Markov decision processes; GRADIENT METHODS; REINFORCEMENT;
DOI
10.1109/TSMCB.2010.2050586
CLC Number
TP [Automation technology; computer technology];
Discipline Classification Code
0812;
Abstract
This paper introduces an algorithm for direct search of control policies in continuous-state, discrete-action Markov decision processes. The algorithm looks for the best closed-loop policy that can be represented using a given number of basis functions (BFs), where a discrete action is assigned to each BF. The type of the BFs and their number are specified in advance and determine the complexity of the representation. Considerable flexibility is achieved by optimizing the locations and shapes of the BFs, together with the action assignments. The optimization is carried out with the cross-entropy method and evaluates the policies by their empirical return from a representative set of initial states. The return for each representative state is estimated using Monte Carlo simulations. The resulting algorithm for cross-entropy policy search with adaptive BFs is extensively evaluated in problems with two to six state variables, for which it reliably obtains good policies with only a small number of BFs. In these experiments, cross-entropy policy search requires vastly fewer BFs than value-function techniques with equidistant BFs, and outperforms policy search with a competing optimization algorithm called DIRECT.
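As a rough illustration of the procedure described in the abstract, the sketch below applies cross-entropy policy search with adaptive Gaussian BFs to a hypothetical one-dimensional "drive the state to the origin" task. The toy dynamics, reward, representative initial states, and helper names such as `evaluate_policy` are illustrative assumptions, not taken from the paper; only the overall scheme (sample candidate BF locations, widths, and action assignments, score them by Monte Carlo return, refit the sampling distribution to the elite candidates) follows the described method.

```python
# Minimal sketch of cross-entropy policy search with adaptive radial basis functions.
# Toy problem and parameter values are assumptions made for illustration only.
import numpy as np

N_BFS = 3                              # number of basis functions (fixed in advance)
ACTIONS = np.array([-1.0, 1.0])        # discrete action set
GAMMA = 0.95                           # discount factor
HORIZON = 50                           # simulation length for Monte Carlo return estimates
X0_SET = np.linspace(-2.0, 2.0, 5)     # representative initial states

def policy_action(x, centers, widths, assignments):
    """Return the discrete action assigned to the BF with the largest activation at state x."""
    act = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
    return ACTIONS[assignments[np.argmax(act)]]

def evaluate_policy(centers, widths, assignments):
    """Average discounted Monte Carlo return over the representative initial states."""
    total = 0.0
    for x0 in X0_SET:
        x, ret, disc = x0, 0.0, 1.0
        for _ in range(HORIZON):
            u = policy_action(x, centers, widths, assignments)
            x = x + 0.1 * u                 # toy dynamics (assumed)
            ret += disc * (-(x ** 2))       # toy reward: stay near the origin (assumed)
            disc *= GAMMA
        total += ret
    return total / len(X0_SET)

def ce_policy_search(n_iters=30, n_samples=100, elite_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Sampling distribution: Gaussians over centers/widths, categorical over action assignments.
    c_mean, c_std = np.zeros(N_BFS), np.full(N_BFS, 2.0)
    w_mean, w_std = np.full(N_BFS, 1.0), np.full(N_BFS, 0.5)
    a_prob = np.full((N_BFS, len(ACTIONS)), 1.0 / len(ACTIONS))
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        cands, scores = [], []
        for _ in range(n_samples):
            centers = rng.normal(c_mean, c_std)
            widths = np.abs(rng.normal(w_mean, w_std)) + 1e-3
            assignments = np.array([rng.choice(len(ACTIONS), p=p) for p in a_prob])
            cands.append((centers, widths, assignments))
            scores.append(evaluate_policy(centers, widths, assignments))
        elite = [cands[i] for i in np.argsort(scores)[-n_elite:]]
        # Refit the sampling distribution to the elite candidates.
        c_mean = np.mean([e[0] for e in elite], axis=0)
        c_std = np.std([e[0] for e in elite], axis=0) + 1e-3
        w_mean = np.mean([e[1] for e in elite], axis=0)
        w_std = np.std([e[1] for e in elite], axis=0) + 1e-3
        counts = np.zeros_like(a_prob)
        for _, _, a in elite:
            counts[np.arange(N_BFS), a] += 1.0
        a_prob = counts / n_elite
    return c_mean, w_mean, a_prob

if __name__ == "__main__":
    centers, widths, action_probs = ce_policy_search()
    print("BF centers:", centers)
    print("BF widths:", widths)
    print("Action probabilities per BF:\n", action_probs)
```

In this sketch the distribution over discrete action assignments is updated from elite frequency counts, while the continuous BF parameters use the usual Gaussian mean/standard-deviation refit of the cross-entropy method; the paper's actual parameterization and smoothing choices may differ.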
Pages: 196-209
Page count: 14