Reinforcement learning and optimal adaptive control: An overview and implementation examples

Cited by: 162
Authors
Khan, Said G. [1 ]
Herrmann, Guido [2 ,3 ]
Lewis, Frank L. [4 ]
Pipe, Tony [1 ]
Melhuish, Chris [5 ]
Affiliations
[1] Univ W England, Bristol Robot Lab, Bristol BS16 1QY, Avon, England
[2] Univ Bristol, Bristol Robot Lab, Bristol, Avon, England
[3] Univ Bristol, Dept Mech Engn, Bristol, Avon, England
[4] Univ Texas Arlington, Automat & Robot Res Inst, Arlington, TX USA
[5] Univ Bristol, Bristol Robot Lab, Bristol, Avon, England
Funding
US National Science Foundation
Keywords
Reinforcement learning; ADP; Q-learning; Optimal adaptive control
DOI
10.1016/j.arcontrol.2012.03.004
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
This paper provides an overview of the reinforcement learning and optimal adaptive control literature and its application to robotics. Reinforcement learning bridges the gap between traditional optimal control, adaptive control, and bio-inspired learning techniques borrowed from animals. This work highlights some of the key techniques presented by well-known researchers from the combined areas of reinforcement learning and optimal control theory. Finally, an example implementation of a novel model-free Q-learning-based discrete optimal adaptive controller for a humanoid robot arm is presented. The controller uses a novel adaptive dynamic programming (ADP) reinforcement learning (RL) approach to develop an optimal policy on-line. The RL joint-space tracking controller was implemented for two links (shoulder flexion and elbow flexion joints) of the arm of the humanoid Bristol-Elumotion-Robotic-Torso II (BERT II). The constrained case (joint limits) of the RL scheme was tested for a single link (elbow flexion) of the BERT II arm by modifying the cost function to deal with the extra nonlinearity due to the joint constraints. (C) 2012 Elsevier Ltd. All rights reserved.
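To make the abstract's method concrete, below is a minimal sketch of model-free Q-learning for a discrete-time linear-quadratic regulation problem, the general kind of ADP scheme the abstract describes. Everything in it is an illustrative assumption rather than code or parameters from the paper: the quadratic Q-function basis, the double-integrator plant standing in for one robot joint, the weights Qx and Ru, and the function names quad_basis and q_learning_lqr are all hypothetical.

    import numpy as np

    def quad_basis(z):
        # Quadratic basis so that Q(x, u) = phi(z)^T theta with z = [x; u];
        # off-diagonal entries are doubled because they appear twice in z^T H z.
        i, j = np.triu_indices(len(z))
        return np.where(i == j, 1.0, 2.0) * np.outer(z, z)[i, j]

    def q_learning_lqr(step, n, m, Qx, Ru, iters=50, T=200, noise=0.1):
        # Learn the Q-function kernel H and a greedy gain K (u = -K x) from
        # data generated by step(x, u) -> x_next, without using the model.
        nz = n + m
        theta = np.zeros(nz * (nz + 1) // 2)   # vectorised upper triangle of H
        K = np.zeros((m, n))
        H = np.zeros((nz, nz))
        for _ in range(iters):
            Phi, y = [], []
            x = np.random.randn(n)
            for _ in range(T):
                u = -K @ x + noise * np.random.randn(m)   # exploration noise
                xn = step(x, u)
                r = x @ Qx @ x + u @ Ru @ u               # stage cost
                # Bellman target: r + Q(x', -K x') under the previous estimate
                Phi.append(quad_basis(np.r_[x, u]))
                y.append(r + quad_basis(np.r_[xn, -K @ xn]) @ theta)
                x = xn
            theta = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
            H = np.zeros((nz, nz))
            H[np.triu_indices(nz)] = theta
            H = H + H.T - np.diag(np.diag(H))             # symmetric kernel
            K = np.linalg.solve(H[n:, n:], H[n:, :n])     # policy improvement
        return K, H

    # Hypothetical discretised double integrator standing in for one joint;
    # the learner only ever calls step(), never sees A or B.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    K, H = q_learning_lqr(lambda x, u: A @ x + B @ u,
                          n=2, m=1, Qx=np.eye(2), Ru=0.1 * np.eye(1))

On a linear plant of this kind, K tends toward the discrete-time LQR gain; the injected exploration noise supplies the persistence of excitation that keeps the least-squares fit well posed. Handling joint limits, as the paper does for the elbow flexion joint, would additionally require reshaping the cost function.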
Pages: 42-59
Page count: 18