Grasp Pose Learning from Human Demonstration with Task Constraints

Cited by: 0
Authors
Yinghui Liu
Kun Qian
Xin Xu
Bo Zhou
Fang Fang
Affiliations
[1] Southeast University, School of Automation
[2] Southeast University, The Key Laboratory of Measurement and Control of CSE, Ministry of Education
Source
Journal of Intelligent & Robotic Systems | 2022 / Vol. 105
Keywords
Learning from demonstration; Robot grasping; Grasp pose detection; Superquadric; Task constraints;
DOI
Not available
Abstract
To learn grasp constraints from human demonstrations, we propose a method that combines data-driven grasp constraint learning with one-shot human demonstration of tasks. Task constraints are represented in a GMM-based, gripper-independent form and learned from simulated data with self-labeled grasp quality scores. Given a human demonstration of the task and a real-world object, the learned task constraint model infers both the unknown grasping task and the probability density distributions of the task constraints over the object point cloud. In addition, we extend a superquadric-based grasp estimation method to reproduce the grasping task with a 2-finger gripper. The task constraints restrict the grasp pose search space, so the geometrically best grasp pose within the task-constrained regions can be obtained. The effectiveness of our method is verified in experiments with a UR5 robot equipped with a 2-finger gripper.
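As a rough illustration of the pipeline sketched in the abstract (not the authors' implementation), the Python snippet below fits a Gaussian mixture model to points labeled as belonging to a task-constrained region of an object, then keeps only grasp candidates whose centers fall in high-density regions before ranking them by a geometric quality score. The function names, the log-density threshold, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch, assuming scikit-learn and NumPy are available; the function
# names, threshold, and synthetic data are illustrative, not the paper's code.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_task_constraint_gmm(constraint_points, n_components=3):
    """Fit a GMM to 3D surface points labeled as task-constrained (e.g. a handle)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(constraint_points)  # constraint_points: array of shape (N, 3)
    return gmm


def filter_and_rank_grasps(grasp_centers, grasp_scores, gmm, log_density_thresh=-3.0):
    """Discard grasp candidates outside the task-constrained region, then
    return the rest sorted by geometric grasp quality (best first)."""
    log_density = gmm.score_samples(grasp_centers)   # per-candidate log-likelihood
    keep = log_density > log_density_thresh          # task-constraint mask
    centers, scores = grasp_centers[keep], grasp_scores[keep]
    order = np.argsort(-scores)                      # highest quality first
    return centers[order], scores[order]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "handle" region on an object plus random grasp candidates.
    handle_pts = rng.normal(loc=[0.05, 0.0, 0.10], scale=0.01, size=(500, 3))
    candidates = rng.uniform(low=[-0.1, -0.1, 0.0], high=[0.1, 0.1, 0.2], size=(200, 3))
    quality = rng.uniform(size=200)                  # stand-in geometric quality scores
    gmm = fit_task_constraint_gmm(handle_pts)
    best_centers, best_scores = filter_and_rank_grasps(candidates, quality, gmm)
    print(f"{len(best_centers)} of 200 candidates lie in the task-constrained region")
```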
Related Papers
50 papers in total
  • [1] Grasp Pose Learning from Human Demonstration with Task Constraints
    Liu, Yinghui
    Qian, Kun
    Xu, Xin
    Zhou, Bo
    Fang, Fang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 105 (02)
  • [2] Recognizing the grasp intention from human demonstration
    de Souza, Ravin
    El-Khoury, Sahar
    Santos-Victor, Jose
    Billard, Aude
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 74 : 108 - 121
  • [3] Learning from Demonstration Facilitates Human-Robot Collaborative Task Execution
    Koskinopoulou, Maria
    Piperakis, Stylianos
    Trahanias, Panos
    ELEVENTH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN ROBOT INTERACTION (HRI'16), 2016, : 59 - 66
  • [4] Learning From Demonstration Based on Environmental Constraints
    Li, Xing
    Brock, Oliver
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 10938 - 10945
  • [5] Robot Grasp Planning: A Learning from Demonstration-Based Approach
    Wang, Kaimeng
    Fan, Yongxiang
    Sakuma, Ichiro
    SENSORS, 2024, 24 (02)
  • [6] Grasp Pose Detection with Affordance-based Task Constraint Learning in Single-view Point Clouds
    Qian, Kun
    Jing, Xingshuo
    Duan, Yanhui
    Zhou, Bo
    Fang, Fang
    Xia, Jing
    Ma, Xudong
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2020, 100 (01) : 145 - 163
  • [7] Learning Partial Ordering Constraints from a Single Demonstration
    Mohseni-Kabir, Anahita
    Rich, Charles
    Chernova, Sonia
    HRI'14: PROCEEDINGS OF THE 2014 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2014, : 248 - 249
  • [8] Procedural Memory Learning from Demonstration for Task Performance
    Yoo, Yong-Ho
    Kim, Jong-Hwan
    2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS, 2015, : 2435 - 2440
  • [9] Interactive Hierarchical Task Learning from a Single Demonstration
    Mohseni-Kabir, Anahita
    Rich, Charles
    Chernova, Sonia
    Sidner, Candace L.
    Miller, Daniel
    PROCEEDINGS OF THE 2015 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'15), 2015, : 205 - 212