Learning from observation paradigm: Leg task models for enabling a biped humanoid robot to imitate human dances

Cited by: 95
Authors
Nakaoka, Shin'ichiro
Nakazawa, Atsushi
Kanehiro, Fumio
Kaneko, Kenji
Morisawa, Mitsuharu
Hirukawa, Hirohisa
Ikeuchi, Katsushi
Affiliations
[1] Univ Tokyo, Inst Ind Sci, Meguro Ku, Tokyo 1538505, Japan
[2] Osaka Univ, Cybermedia Ctr, Osaka 5600043, Japan
[3] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba, Ibaraki 3058568, Japan
Keywords
learning from observation; imitation; biped humanoid robot; motion capture; entertainment robotics;
DOI
10.1177/0278364907079430
CLC classification number
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
This paper proposes a framework that realizes the Learning from Observation paradigm for learning dance motions. The framework enables a humanoid robot to imitate dance motions captured from human demonstrations. This study focuses especially on leg motions, in a novel attempt to make a biped robot imitate not only upper-body motions but also leg motions, including steps. Body differences between the robot and the original dancer make the problem difficult, because those differences prevent the robot from straightforwardly following the original motions and also change the dynamic body balance. We propose leg task models, which play a key role in solving the problem. Low-level tasks in leg motion are modelled so that they clearly provide the essential information required for keeping dynamic stability, together with important motion characteristics. The models divide the problem of adapting motions into two sub-problems: recognizing a sequence of tasks and executing that task sequence. We have developed a method for recognizing the tasks from captured motion data and a method for generating task motions that can be executed by existing robots, including HRP-2. HRP-2 successfully performed the generated motions, which imitated a traditional folk dance performed by human dancers.
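The two sub-problems the abstract names, recognizing a task sequence from captured motion and then executing it, can be sketched in miniature. The sketch below is a hypothetical illustration, not the paper's algorithm: it segments a swing-foot height trajectory into assumed STAND/STEP tasks using a made-up lift threshold, whereas the paper's actual leg task models and recognition rules are considerably more elaborate.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical task labels; the paper's actual task set and
# detection rules differ in detail.
STAND, STEP = "STAND", "STEP"

@dataclass
class LegTask:
    kind: str
    start: float  # start time [s]
    end: float    # end time [s]

def recognize_tasks(foot_heights: List[float], dt: float,
                    lift_threshold: float = 0.02) -> List[LegTask]:
    """Segment a captured foot-height trajectory into leg tasks.

    A frame whose swing-foot height exceeds `lift_threshold` belongs
    to a STEP; contiguous ground-contact frames form a STAND.
    """
    tasks: List[LegTask] = []
    current: Optional[LegTask] = None
    for i, h in enumerate(foot_heights):
        kind = STEP if h > lift_threshold else STAND
        if current is None or current.kind != kind:
            if current is not None:
                tasks.append(current)      # close the previous task
            current = LegTask(kind, i * dt, (i + 1) * dt)
        else:
            current.end = (i + 1) * dt     # extend the running task
    if current is not None:
        tasks.append(current)
    return tasks

# Example: the foot lifts mid-sequence -> STAND, STEP, STAND
heights = [0.0, 0.0, 0.05, 0.08, 0.04, 0.0, 0.0]
print([t.kind for t in recognize_tasks(heights, dt=0.1)])
# → ['STAND', 'STEP', 'STAND']
```

The recognized sequence would then feed a separate execution stage that regenerates each task's motion within the robot's own balance and joint limits, which is where the paper's contribution actually lies.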
Pages: 829-844
Page count: 16