Feature Selection and Feature Learning for High-dimensional Batch Reinforcement Learning: A Survey

Cited: 27
Authors
Liu, De-Rong [1 ]
Li, Hong-Liang [1 ]
Wang, Ding [1 ]
Affiliations
[1] Chinese Academy of Sciences, Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Beijing 100190, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Intelligent control; reinforcement learning; adaptive dynamic programming; feature selection; feature learning; big data
DOI
10.1007/s11633-015-0893-y
CLC Number
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Tremendous amounts of data are generated and stored every day in many complex engineering and social systems. It is both significant and feasible to exploit such big data to make better decisions via machine learning techniques. In this paper, we focus on batch reinforcement learning (RL) algorithms for discounted Markov decision processes (MDPs) with large discrete or continuous state spaces, aiming to learn the best possible policy from a fixed amount of training data. Batch RL algorithms with handcrafted feature representations work well for low-dimensional MDPs. However, many real-world RL tasks involve high-dimensional state spaces, for which it is difficult or even infeasible to design features for value function approximation by feature engineering. To cope with high-dimensional RL problems, the desire for data-driven features has motivated much work on incorporating feature selection and feature learning into traditional batch RL algorithms. In this paper, we provide a comprehensive survey of automatic feature selection and unsupervised feature learning for high-dimensional batch RL. Moreover, we present recent theoretical developments that apply statistical learning to establish finite-sample error bounds for batch RL algorithms based on weighted Lp norms. Finally, we outline some future directions for research on RL algorithms, theory and applications.
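The record itself contains no code, but as a concrete illustration of the batch RL setting the abstract describes, below is a minimal Python sketch of fitted Q-iteration, one of the classic batch RL algorithms this survey covers, using a handcrafted linear feature map over a fixed batch of transitions. Everything here (the feature map phi, the ridge term, the function names) is an illustrative assumption, not the survey's own notation or method.

import numpy as np

# Minimal, illustrative sketch of fitted Q-iteration on a fixed batch of
# transitions (s, a, r, s'), with a handcrafted linear feature map.
# All names below are hypothetical; this is not the survey's notation.

def phi(state, action, n_actions, n_state_feats):
    # Handcrafted features: one block of raw state features per action.
    feats = np.zeros(n_actions * n_state_feats)
    feats[action * n_state_feats:(action + 1) * n_state_feats] = state
    return feats

def fitted_q_iteration(batch, n_actions, n_state_feats, gamma=0.99, n_iters=50):
    # batch: list of (state, action, reward, next_state) tuples.
    dim = n_actions * n_state_feats
    w = np.zeros(dim)
    Phi = np.array([phi(s, a, n_actions, n_state_feats) for s, a, _, _ in batch])
    for _ in range(n_iters):
        # Bellman targets under the current weights, greedy over actions.
        targets = np.array([
            r + gamma * max(phi(s2, a2, n_actions, n_state_feats) @ w
                            for a2 in range(n_actions))
            for _, _, r, s2 in batch
        ])
        # Ridge-regularized least-squares regression onto the targets.
        w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(dim), Phi.T @ targets)
    return w

# Example usage with random data (purely illustrative):
# batch = [(np.random.rand(4), np.random.randint(2), np.random.rand(),
#           np.random.rand(4)) for _ in range(200)]
# w = fitted_q_iteration(batch, n_actions=2, n_state_feats=4)

For low-dimensional MDPs such a handcrafted phi is workable; the survey's central point is that designing phi by hand becomes infeasible in high-dimensional state spaces, which motivates the automatic feature selection and unsupervised feature learning methods it reviews.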
Pages: 229-242 (14 pages)