Feature Selection and Feature Learning for High-dimensional Batch Reinforcement Learning: A Survey

Cited by: 27
Authors
Liu, De-Rong [1 ]
Li, Hong-Liang [1 ]
Wang, Ding [1 ]
Affiliations
[1] State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Funding
National Natural Science Foundation of China
Keywords
Intelligent control; reinforcement learning; adaptive dynamic programming; feature selection; feature learning; big data
DOI
10.1007/s11633-015-0893-y
CLC number
TP [Automation Technology, Computer Technology]
Subject classification number
0812
Abstract
Tremendous amounts of data are generated and stored every day in many complex engineering and social systems. It is both important and feasible to use machine learning techniques to turn such big data into better decisions. In this paper, we focus on batch reinforcement learning (RL) algorithms for discounted Markov decision processes (MDPs) with large discrete or continuous state spaces, which aim to learn the best possible policy from a fixed amount of training data. Batch RL algorithms with handcrafted feature representations work well for low-dimensional MDPs. However, many real-world RL tasks involve high-dimensional state spaces, where it is difficult or even infeasible to design features for value function approximation by hand. To cope with high-dimensional RL problems, the desire for data-driven features has motivated a large body of work on incorporating feature selection and feature learning into traditional batch RL algorithms. In this paper, we provide a comprehensive survey of automatic feature selection and unsupervised feature learning for high-dimensional batch RL. Moreover, we present recent theoretical developments that apply statistical learning theory to establish finite-sample error bounds for batch RL algorithms under weighted L_p norms. Finally, we outline some future directions for RL algorithms, theory and applications.
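The abstract refers to batch RL algorithms that learn a policy from a fixed set of transitions via value function approximation. For orientation, below is a minimal sketch of fitted Q-iteration, one canonical batch RL scheme of the kind the survey covers; the function name, data layout, and the choice of a random-forest regressor are illustrative assumptions, not the authors' method.

    # Illustrative sketch of fitted Q-iteration: a batch RL algorithm for
    # discounted MDPs that learns only from a fixed dataset of transitions.
    # The regressor choice and all names here are assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def fitted_q_iteration(transitions, n_actions, gamma=0.95, n_iters=50):
        """transitions: list of (state, action, reward, next_state) tuples,
        where each state is a 1-D feature vector."""
        states = np.array([t[0] for t in transitions])
        actions = np.array([t[1] for t in transitions])
        rewards = np.array([t[2] for t in transitions])
        next_states = np.array([t[3] for t in transitions])

        # Regressor approximates Q(s, a) from (state features, action) inputs.
        model = RandomForestRegressor(n_estimators=50)
        inputs = np.column_stack([states, actions])
        targets = rewards.copy()  # first iteration: Q_1 = immediate reward

        for _ in range(n_iters):
            model.fit(inputs, targets)
            # Bellman backup on the fixed data: r + gamma * max_a' Q(s', a')
            q_next = np.column_stack([
                model.predict(np.column_stack(
                    [next_states, np.full(len(next_states), a)]))
                for a in range(n_actions)
            ])
            targets = rewards + gamma * q_next.max(axis=1)
        return model

When the state is high-dimensional, the handcrafted feature vector fed to such a regressor becomes the bottleneck, which is exactly where the feature selection and feature learning methods surveyed here come in.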
Pages: 229-242
Page count: 14