Keyframe-based Learning from Demonstration Method and Evaluation

Cited by: 135
Authors
Akgun, Baris [1 ]
Cakmak, Maya [1 ]
Jiang, Karl [1 ]
Thomaz, Andrea L. [1 ]
Institution
[1] Georgia Inst Technol, Sch Interact Comp, Atlanta, GA 30332 USA
Keywords
Learning from Demonstration; Kinesthetic teaching; Human-Robot Interaction; Humanoid robotics; IMITATION; TASK;
DOI
10.1007/s12369-012-0160-0
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification Code
080202; 1405
Abstract
We present a framework for learning skills from novel types of demonstrations that have been shown to be desirable from a Human-Robot Interaction perspective. Our approach, Keyframe-based Learning from Demonstration (KLfD), takes demonstrations that consist of keyframes: a sparse set of points in the state space that produces the intended skill when visited in sequence. Conventional trajectory demonstrations, or a hybrid of the two, are also handled by KLfD through a conversion to keyframes. Our method produces a skill model consisting of an ordered set of keyframe clusters, which we call Sequential Pose Distributions (SPD). The skill is reproduced by splining between clusters. We present results from two domains: mouse gestures in 2D, and scooping, pouring, and placing skills on a humanoid robot. KLfD has performance similar to existing LfD techniques when applied to conventional trajectory demonstrations. Additionally, we demonstrate that KLfD may be preferable when the demonstration type is well suited to the skill.
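The abstract describes a concrete pipeline: keyframes from multiple demonstrations are grouped into an ordered set of clusters, and the skill is reproduced by splining through them. The sketch below is only an illustrative approximation of that idea, not the paper's implementation; the function names (cluster_keyframes, reproduce_skill), the assumption of aligned, equal-length keyframe demonstrations, and the use of simple per-index means in place of full Sequential Pose Distributions are all assumptions made for illustration.

```python
# Minimal sketch of the keyframe-clustering-and-splining idea outlined in the
# abstract. This is a toy approximation, not the paper's KLfD algorithm: it
# assumes demonstrations are already aligned and of equal length, and it
# replaces Gaussian keyframe clusters (SPDs) with per-index mean poses.
import numpy as np
from scipy.interpolate import CubicSpline

def cluster_keyframes(demos):
    """Average corresponding keyframes across demonstrations.

    demos: list of arrays, each of shape (n_keyframes, state_dim).
    Returns an ordered array of cluster means, shape (n_keyframes, state_dim).
    """
    stacked = np.stack(demos)        # (n_demos, n_keyframes, state_dim)
    return stacked.mean(axis=0)      # one mean pose per keyframe cluster

def reproduce_skill(cluster_means, n_points=100):
    """Spline through the ordered cluster means to generate a dense trajectory."""
    n_clusters = cluster_means.shape[0]
    phase = np.linspace(0.0, 1.0, n_clusters)        # nominal keyframe timing
    spline = CubicSpline(phase, cluster_means, axis=0)
    return spline(np.linspace(0.0, 1.0, n_points))   # dense reproduction

# Toy 2D example: three keyframe demonstrations of a simple gesture.
demos = [
    np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]]),
    np.array([[0.1, 0.0], [1.1, 2.1], [2.1, 1.9], [3.1, 0.1]]),
    np.array([[0.0, 0.1], [0.9, 1.9], [1.9, 2.0], [3.0, 0.0]]),
]
trajectory = reproduce_skill(cluster_keyframes(demos))
print(trajectory.shape)  # (100, 2)
```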
Pages: 343-355
Number of pages: 13
Related Papers
50 records in total
  • [21] Learning Articulated Constraints From a One-Shot Demonstration for Robot Manipulation Planning
    Liu, Yizhou
    Zha, Fusheng
    Sun, Lining
    Li, Jingxuan
    Li, Mantian
    Wang, Xin
    IEEE ACCESS, 2019, 7: 172584-172596
  • [22] Robot learning from demonstration for path planning: A review
    Xie, ZongWu
    Zhang, Qi
    Jiang, ZaiNan
    Liu, Hong
    SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2020, 63 (08): 1325-1334
  • [23] Disentangled Relational Representations for Explaining and Learning from Demonstration
    Hristov, Yordan
    Angelov, Daniel
    Burke, Michael
    Lascarides, Alex
    Ramamoorthy, Subramanian
    CONFERENCE ON ROBOT LEARNING, VOL 100, 2019, 100
  • [24] Priming as a Means to Reduce Ambiguity in Learning from Demonstration
    Fonooni, Benjamin
    Hellstrom, Thomas
    Janlert, Lars-Erik
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2016, 8 (01): 5-19
  • [25] Adaptive motion planning framework by learning from demonstration
    Li, Xiao
    Cheng, Hongtai
    Liang, Xiaoxiao
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2019, 46 (04): 541-552
  • [26] Predictive Learning from Demonstration
    Billing, Erik A.
    Hellstrom, Thomas
    Janlert, Lars-Erik
    AGENTS AND ARTIFICIAL INTELLIGENCE, 2011, 129: 186-200
  • [27] Quantifying teaching behavior in robot learning from demonstration
    Sena, Aran
    Howard, Matthew
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2020, 39 (01): 54-72
  • [28] A Formalism for Learning from Demonstration
    Billing, Erik A.
    Hellström, Thomas
    Paladyn, 2010, 1 (01): 1-13
  • [29] Investigating Learning from Demonstration in Imperfect and Real World Scenarios
    Hedlund-Botti, Erin
    Gombolay, Matthew C.
    COMPANION OF THE ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2023, 2023: 769-771
  • [30] Humanoid Robot Learning by Demonstration based on Visual Bootstrapping technique
    Sander, Marcos
    Aguirre, Andres
    Benavides, Facundo
    PROCEEDINGS OF THE 2016 XLII LATIN AMERICAN COMPUTING CONFERENCE (CLEI), 2016