Reinforcement learning and optimal adaptive control: An overview and implementation examples

Cited: 162
Authors
Khan, Said G. [1 ]
Herrmann, Guido [2 ,3 ]
Lewis, Frank L. [4 ]
Pipe, Tony [1 ]
Melhuish, Chris [5 ]
Affiliations
[1] Univ W England, Bristol Robot Lab, Bristol BS16 1QY, Avon, England
[2] Univ Bristol, Bristol Robot Lab, Bristol, Avon, England
[3] Univ Bristol, Dept Mech Engn, Bristol, Avon, England
[4] Univ Texas Arlington, Automat & Robot Res Inst, Arlington, TX USA
[5] Univ Bristol, Bristol Robot Lab, Bristol, Avon, England
Funding
National Science Foundation (USA);
Keywords
Reinforcement learning; ADP; Q-learning; Optimal adaptive control; Systems
DOI
10.1016/j.arcontrol.2012.03.004
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
This paper provides an overview of the reinforcement learning and optimal adaptive control literature and its application to robotics. Reinforcement learning bridges the gap between traditional optimal control, adaptive control, and bio-inspired learning techniques borrowed from animals. The work highlights key techniques presented by well-known researchers from the combined areas of reinforcement learning and optimal control theory. Finally, an implementation example of a novel model-free Q-learning based discrete optimal adaptive controller for a humanoid robot arm is presented. The controller uses a novel adaptive dynamic programming (ADP) reinforcement learning (RL) approach to develop an optimal policy online. The RL joint-space tracking controller was implemented for two links (shoulder flexion and elbow flexion joints) of the arm of the humanoid Bristol-Elumotion-Robotic-Torso II (BERT II). The constrained case (joint limits) of the RL scheme was tested for a single link (elbow flexion) of the BERT II arm by modifying the cost function to deal with the extra nonlinearity due to the joint constraints. (C) 2012 Elsevier Ltd. All rights reserved.
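The central algorithmic idea the abstract describes, model-free Q-learning based adaptive dynamic programming for discrete-time optimal control, can be illustrated with a minimal sketch. The Python code below is not the paper's controller; it runs Q-learning policy iteration on an illustrative second-order linear plant, where the matrices A and B, the cost weights, the exploration-noise level, and the sample counts are all assumptions chosen for demonstration. A quadratic Q-function is fitted by least squares from measured (state, action, cost, next state) samples, and the feedback gain is improved from the fitted Q-function partitions; the learner never uses the plant model directly.

# Minimal sketch: model-free Q-learning policy iteration for a discrete-time
# LQR problem, in the spirit of the ADP approach the paper describes.
# The plant (A, B), cost weights, noise level, and sample counts are
# illustrative assumptions; the learner itself only sees measured samples.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable second-order plant: x_{k+1} = A x_k + B u_k.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc, Rc = np.eye(2), np.eye(1)      # quadratic stage-cost weights
n, m = 2, 1
p = n + m                          # dimension of z = [x; u]
iu = np.triu_indices(p)

def quad_basis(z):
    # Basis for Q(z) = z' H z with H symmetric: off-diagonal terms count twice.
    scale = np.where(iu[0] == iu[1], 1.0, 2.0)
    return scale * np.outer(z, z)[iu]

def theta_to_H(theta):
    # Rebuild the symmetric kernel H from its upper-triangular parameters.
    H = np.zeros((p, p))
    H[iu] = theta
    return H + H.T - np.diag(np.diag(H))

K = np.zeros((m, n))               # initial policy u = -K x (stabilising here)
for _ in range(10):
    # Policy evaluation: solve the Q-function Bellman equation
    # z' H z = cost + z_next' H z_next by least squares over measured samples.
    Phi, y = [], []
    for _ in range(200):
        x = rng.normal(size=n)
        u = -K @ x + 0.1 * rng.normal(size=m)   # exploration noise
        x_next = A @ x + B @ u                  # "measured"; model unused by learner
        u_next = -K @ x_next                    # on-policy successor action
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, u_next])
        Phi.append(quad_basis(z) - quad_basis(z_next))
        y.append(x @ Qc @ x + u @ Rc @ u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = theta_to_H(theta)
    # Policy improvement: minimising z' H z over u gives u = -Huu^{-1} Hux x.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

print("Learned feedback gain K:\n", K)

In this idealised linear setting the gain typically converges to the LQR-optimal feedback within a few iterations; the paper extends the same idea to joint-space tracking on the BERT II arm and to a constraint-aware cost for the joint-limited case.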
Pages: 42-59
Number of pages: 18