Grasp Pose Learning from Human Demonstration with Task Constraints

Cited by: 0
Authors
Yinghui Liu
Kun Qian
Xin Xu
Bo Zhou
Fang Fang
Affiliations
[1] Southeast University,School of Automation
[2] Southeast University,The Key Laboratory of Measurement and Control of CSE, Ministry of Education
Source
Journal of Intelligent & Robotic Systems | 2022, Vol. 105
Keywords
Learning from demonstration; Robot grasping; Grasp pose detection; Superquadric; Task constraints;
DOI: not available
Abstract
To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp constraint learning with one-shot human demonstration of tasks. Task constraints are represented in a GMM-based, gripper-independent form and learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and a real-world object, the learned task constraint model infers both the unknown grasping task and the probability density distributions of the task constraints over the object point cloud. In addition, we extend the superquadric-based grasp estimation method to reproduce the grasping task with 2-finger grippers. The task constraints restrict the search scope of the grasp pose, so the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments with a UR5 robot equipped with a 2-finger gripper.
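The core idea above can be illustrated with a minimal sketch: fit a Gaussian mixture model over contact points from high-quality demonstrated grasps, then evaluate its density over a new object's point cloud and keep only the high-likelihood region as the task-constrained search space for grasp candidates. All data, object geometry, and thresholds below are hypothetical placeholders, not the paper's actual pipeline or datasets.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical "demonstration" data: 3-D contact points from good grasps,
# clustered around a handle-like region in the object frame.
handle_points = rng.normal(loc=[0.10, 0.0, 0.05], scale=0.01, size=(200, 3))

# Fit a GMM as a gripper-independent model of the task-constrained region.
gmm = GaussianMixture(n_components=2, random_state=0).fit(handle_points)

# Hypothetical object point cloud: body points plus some handle points.
body = rng.normal(loc=[0.0, 0.0, 0.05], scale=0.02, size=(300, 3))
cloud = np.vstack([body, handle_points[:50]])

# Score every cloud point under the learned density; keep only points whose
# log-likelihood is in the top 20% -> the task-constrained region.
log_p = gmm.score_samples(cloud)
mask = log_p > np.percentile(log_p, 80)
constrained = cloud[mask]

# A grasp pose search would now run only over `constrained`, not `cloud`.
print(constrained.shape[0], "of", cloud.shape[0], "points are task-constrained")
```

In this toy setting the retained points concentrate near the demonstrated handle region, which is the effect the paper exploits: restricting the superquadric-based grasp search to the task-constrained subset of the cloud.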