Learning Object Orientation Constraints and Guiding Constraints for Narrow Passages from One Demonstration

Cited by: 3
Authors
Li, Changshuo [1]
Berenson, Dmitry [2]
Affiliations
[1] Worcester Polytech Inst, Worcester, MA 01609 USA
[2] Univ Michigan, Ann Arbor, MI 48109 USA
Source
2016 INTERNATIONAL SYMPOSIUM ON EXPERIMENTAL ROBOTICS | 2017 / Vol. 1
Keywords
Learning from demonstration; Constraints learning; Manipulation planning; MANIPULATION; TASK;
DOI
10.1007/978-3-319-50115-4_18
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Narrow passages and orientation constraints are common in manipulation tasks, and sampling-based planning methods can be quite time-consuming in such scenarios. We propose a method that learns object orientation constraints and guiding constraints, represented as Task Space Regions, from a single human demonstration by analyzing the geometry around the demonstrated trajectory. The key idea of our method is to explore the area around the demonstration trajectory by sampling in task space and to learn constraints by segmenting and analyzing the feasible samples. Our method is tested on a tire-changing scenario comprising four sub-tasks and on a cup-retrieving task. Our results show that our method produces plans for all of these tasks in less than 3 min, succeeding in 50 of 50 trials for every task, while the baseline methods succeed only once in 50 trials within 30 min, and only for one of the tasks. The results also show that our method can perform similar tasks with additional obstacles, transfer to similar tasks with different start and/or goal poses, and be used for real-world tasks on a PR2 robot.
Pages: 197-210
Page count: 14
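
To make the sampling-and-segmentation idea described in the abstract concrete, below is a minimal Python sketch, not the authors' implementation: it perturbs the orientation at each demonstrated waypoint, keeps the samples a feasibility check accepts, and records per-axis bounds in the spirit of a Task Space Region. The is_feasible check and the roll/pitch/yaw offset parameterization are assumptions standing in for the paper's inverse-kinematics and collision tests.

    # Sketch of the key idea: sample orientation perturbations around a
    # demonstrated trajectory, keep feasible samples, and summarize them as
    # per-waypoint orientation bounds (a TSR-like interval representation).
    import numpy as np

    def is_feasible(position, rpy_offset):
        """Hypothetical stand-in for an IK/collision feasibility check.
        Here it simply accepts small orientation offsets; `position` is
        unused in this placeholder."""
        return bool(np.all(np.abs(rpy_offset) < np.deg2rad(20.0)))

    def learn_orientation_bounds(demo_positions, n_samples=200,
                                 max_offset=np.deg2rad(45.0), seed=0):
        """For each demonstrated waypoint, sample roll/pitch/yaw offsets and
        record the per-axis range spanned by the feasible samples."""
        rng = np.random.default_rng(seed)
        bounds = []
        for p in demo_positions:
            offsets = rng.uniform(-max_offset, max_offset, size=(n_samples, 3))
            feasible = np.array([o for o in offsets if is_feasible(p, o)])
            if len(feasible) == 0:
                # No slack found: pin the orientation to the demonstrated value.
                bounds.append((np.zeros(3), np.zeros(3)))
            else:
                bounds.append((feasible.min(axis=0), feasible.max(axis=0)))
        return bounds

    if __name__ == "__main__":
        # Straight-line demo trajectory through a (pretend) narrow passage.
        demo = np.linspace([0.0, 0.0, 0.0], [0.5, 0.0, 0.0], num=10)
        for i, (lo, hi) in enumerate(learn_orientation_bounds(demo)):
            print(f"waypoint {i}: rpy offset bounds "
                  f"[{np.rad2deg(lo).round(1)}, {np.rad2deg(hi).round(1)}] deg")

Consecutive waypoints with similar bounds could then be merged into segments, each yielding one orientation or guiding constraint, mirroring the segmentation step the abstract describes.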