Semantic learning from keyframe demonstration using object attribute constraints

Cited: 0
Authors
Sen, Busra [1 ]
Elfring, Jos [1 ]
Torta, Elena [1 ]
van de Molengraft, Rene [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Mech Engn, Eindhoven, Netherlands
Source
FRONTIERS IN ROBOTICS AND AI | 2024, Vol. 11
Funding
UK Research and Innovation (UKRI);
Keywords
learning from demonstration; keyframe demonstrations; object attributes; task goal learning; semantic learning; ROBOT; REPRESENTATIONS;
DOI
10.3389/frobt.2024.1340334
CLC classification
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
Learning from demonstration is an approach that allows users to personalize a robot's tasks. While demonstrations often focus on conveying the robot's motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by recording only robot poses instead of full trajectories, allowing us to focus on the task's goals, specifically the objects related to these goals. At each step, known as a keyframe, we record the end-effector pose, object poses, and object attributes. However, the number of keyframes saved in each demonstration can vary depending on the user's decisions. This variability across demonstrations can lead to inconsistencies in the significance of keyframes, complicating keyframe alignment when generalizing the robot's motion and the user's intention. Our method addresses this issue by focusing on teaching the higher-level goals of the task using only the required keyframes and relevant objects. It aims to teach the rationale behind object selection for a task and to generalize this reasoning to environments with previously unseen objects. We validate the proposed method on three manipulation tasks, each targeting different object attribute constraints. In the reproduction phase, we demonstrate that even when the robot encounters previously unseen objects, it can generalize the user's intention and execute the task.
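To make the idea of attribute-constraint learning concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes a simplified keyframe representation (a manipulated object and a placement target, each with an attribute dictionary; the names `Obj`, `Keyframe`, and `shared_attribute_constraints` are hypothetical) and extracts the attribute pairs whose values match in every keyframe of every demonstration, e.g. "the cup's color equals the plate's color":

```python
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"color": "red", "category": "cup"}

@dataclass
class Keyframe:
    manipulated: Obj  # object grasped/placed at this keyframe
    target: Obj       # object or location it is placed relative to

def shared_attribute_constraints(demos):
    """Return the set of (manipulated_attr, target_attr) pairs whose
    values agree in every keyframe of every demonstration."""
    candidates = None
    for demo in demos:
        for kf in demo:
            matches = {
                (a, b)
                for a, va in kf.manipulated.attributes.items()
                for b, vb in kf.target.attributes.items()
                if va == vb
            }
            # Keep only constraints that hold in all keyframes seen so far.
            candidates = matches if candidates is None else candidates & matches
    return candidates or set()

# Two demonstrations: a cup is placed next to the plate of matching color.
demo1 = [Keyframe(Obj("cup1", {"color": "red", "category": "cup"}),
                  Obj("plate1", {"color": "red", "category": "plate"}))]
demo2 = [Keyframe(Obj("cup2", {"color": "blue", "category": "cup"}),
                  Obj("plate2", {"color": "blue", "category": "plate"}))]

print(shared_attribute_constraints([demo1, demo2]))
# → {('color', 'color')}
```

A rule surviving every demonstration can then be applied to unseen objects: a green cup would be placed next to the green plate, even though green never appeared during teaching.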
Pages: 23