Grasp Pose Learning from Human Demonstration with Task Constraints

Cited by: 0
Authors
Yinghui Liu
Kun Qian
Xin Xu
Bo Zhou
Fang Fang
Affiliations
[1] Southeast University, School of Automation
[2] Southeast University, The Key Laboratory of Measurement and Control of CSE, Ministry of Education
Source
Journal of Intelligent & Robotic Systems, 2022, Vol. 105
Keywords
Learning from demonstration; Robot grasping; Grasp pose detection; Superquadric; Task constraints
DOI
Not available
Abstract
To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp constraint learning with one-shot human demonstration of tasks. Task constraints are represented in a GMM-based, gripper-independent form and learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and an observation of a real-world object, the learned task constraint model infers both the unknown grasping task and the probability density distribution of the task constraints over the object point cloud. In addition, we extend the superquadric-based grasp estimation method to reproduce the grasping task with 2-finger grippers. The task constraints restrict the search scope for the grasp pose, so the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments with a UR5 robot equipped with a 2-finger gripper.
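The pipeline in the abstract (fit a GMM constraint model on simulated, quality-scored grasp samples, then use its density to restrict the grasp search region on an observed object) can be sketched in a few lines. The Python sketch below is an illustration only, not the authors' implementation: the 6-D feature layout, the random placeholder data, the quality-weighted resampling, and the 80th-percentile density cutoff are all assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder for simulated grasp samples: each row is a gripper-independent
# feature vector (e.g., contact point in the object frame plus local surface
# normal), paired with a self-labeled grasp quality score. Both arrays are
# random stand-ins, not real simulation output.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 6))
quality = rng.uniform(size=500)

# Emphasize high-quality grasps by resampling rows in proportion to their
# quality scores, a simple stand-in for score-weighted GMM fitting.
idx = rng.choice(len(features), size=len(features), p=quality / quality.sum())
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(features[idx])

# At inference time, evaluate the learned constraint density on features
# computed from the observed object point cloud, and keep only the
# high-density points as the search scope for grasp pose estimation.
cloud_feats = rng.normal(size=(2000, 6))  # placeholder observed features
log_density = gmm.score_samples(cloud_feats)
mask = log_density > np.quantile(log_density, 0.8)  # assumed 80th-percentile cutoff
constrained_points = cloud_feats[mask]
print(f"{mask.sum()} of {len(cloud_feats)} points lie in the task-constrained region")

In the paper's setting, the superquadric-based estimator would then search for the geometrically best 2-finger grasp pose only within this task-constrained region rather than over the whole object surface.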
Related Papers
50 items in total
  • [21] Learning from demonstration using products of experts: Applications to manipulation and task prioritization
    Pignat, Emmanuel
    Silverio, Joao
    Calinon, Sylvain
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2022, 41 (02) : 163 - 188
  • [22] Learning Articulated Constraints From a One-Shot Demonstration for Robot Manipulation Planning
    Liu, Yizhou
    Zha, Fusheng
    Sun, Lining
    Li, Jingxuan
    Li, Mantian
    Wang, Xin
    IEEE ACCESS, 2019, 7 : 172584 - 172596
  • [23] Segment, Compare, and Learn: Creating Movement Libraries of Complex Task for Learning from Demonstration
    Prados, Adrian
    Espinoza, Gonzalo
    Moreno, Luis
    Barber, Ramon
    BIOMIMETICS, 2025, 10 (01)
  • [24] ACNMP: Skill Transfer and Task Extrapolation through Learning from Demonstration and Reinforcement Learning via Representation Sharing
    Akbulut, M. Tuluhan
    Oztop, Erhan
    Seker, M. Yunus
    Xue, Honghu
    Tekden, Ahmet E.
    Ugur, Emre
    CONFERENCE ON ROBOT LEARNING, 2020, 155 : 1896 - 1907
  • [25] Task-Adaptive Robot Learning From Demonstration With Gaussian Process Models Under Replication
    Arduengo, Miguel
    Colome, Adria
    Borras, Julia
    Sentis, Luis
    Torras, Carme
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02) : 966 - 973
  • [26] Task-Level Learning from Demonstration and Generation of Action Examples for Hierarchical Control Structure
    Gorbenko, Anna
    AEROSPACE AND MECHANICAL ENGINEERING, 2014, 565 : 194 - 197
  • [27] Environment-adaptive learning from demonstration for proactive assistance in human-robot collaborative tasks
    Qian, Kun
    Xu, Xin
    Liu, Huan
    Bai, Jishen
    Luo, Shan
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2022, 151
  • [28] Human and Robot Perception in Large-scale Learning from Demonstration
    Crick, Christopher
    Osentoski, Sarah
    Jay, Graylin
    Jenkins, Odest Chadwicke
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2011), 2011 : 339 - 346
  • [29] Heterogeneous Learning from Demonstration
    Paleja, Rohan
    Gombolay, Matthew
    HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2019 : 730 - 732
  • [30] A Formalism for Learning from Demonstration
    Billing, Erik A.
    Hellström, Thomas
    PALADYN, 2010, 1 (01) : 1 - 13