Learning to control in operational space

Cited by: 102
Authors
Peters, Jan [1 ,2 ]
Schaal, Stefan [2 ,3 ]
Affiliations
[1] Univ Tubingen, Max Planck Inst Biol Cybernet, D-72076 Tubingen, Germany
[2] Univ So Calif, Los Angeles, CA 90089 USA
[3] ATR Computat Neurosci Lab, Kyoto 6190288, Japan
Keywords
operational space control; robot learning; reinforcement learning; reward-weighted regression;
DOI
10.1177/0278364907087548
CLC (Chinese Library Classification) number
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational-space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots (e.g. humanoid robots). In this paper, we suggest a learning approach for operational-space control as a direct inverse model learning problem. A first important insight for this paper is that a physically correct solution to the inverse problem with redundant degrees of freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is based on the insight that many operational-space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational-space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm in order to solve this problem. Evaluations on a three degrees-of-freedom robot arm are used to illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex high degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.
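The abstract describes an expectation-maximization policy search that maximizes an immediate reward (reward-weighted regression). A minimal sketch of that idea on a toy problem, assuming a 1-D linear-Gaussian policy and an illustrative reward function; all names, the reward, and the dimensions here are invented for illustration and are not taken from the paper:

```python
# Reward-weighted regression sketch: E-step samples exploratory actions,
# M-step refits the policy by reward-weighted least squares.
import numpy as np

rng = np.random.default_rng(0)

def reward(s, a):
    # Toy immediate reward: favors actions near 2*s, the "correct" (unknown) map.
    return np.exp(-(a - 2.0 * s) ** 2)

theta, sigma = 0.0, 1.0  # linear-Gaussian policy: a ~ N(theta * s, sigma^2)
for _ in range(50):      # EM iterations
    s = rng.uniform(-1.0, 1.0, size=200)               # sampled states
    a = theta * s + sigma * rng.standard_normal(200)   # E-step: explore
    w = reward(s, a)                                   # reward weights
    # M-step: reward-weighted least squares for the policy parameter
    theta = np.sum(w * s * a) / np.sum(w * s * s)

print(theta)  # converges toward 2.0, the reward-maximizing linear map
```

The fixed point of the update is the parameter of the reward-maximizing map (here 2.0), since high-reward samples dominate the weighted regression; the paper applies this principle to learn operational-space control laws rather than a scalar gain.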
Pages: 197-212
Page count: 16