Deep Learning Method for Grasping Novel Objects Using Dexterous Hands

Cited by: 16
Authors
Shang, Weiwei [1 ]
Song, Fangjing [1 ]
Zhao, Zengzhi [1 ]
Gao, Hongbo [1 ]
Cong, Shuang [1 ]
Li, Zhijun [1 ]
Affiliations
[1] Univ Sci & Technol China, Dept Automat, Hefei 230027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Grasping; Indexes; Robots; Grippers; Force; Machine learning; Task analysis; Deep learning; dexterous hand; grasp posture; novel object; robotic grasp;
DOI
10.1109/TCYB.2020.3022175
CLC Classification Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Robotic grasping ability lags far behind human skill and remains a significant challenge in robotics research. Depending on which part of an object is to be grasped, humans select appropriate finger postures; when grasping the same part of an object, different palm poses lead them to choose different grasping postures. Inspired by these human skills, in this article we propose grasping posture prediction networks (GPPNs) with multiple inputs, which combine information from the object image and the palm pose of the dexterous hand to predict appropriate grasping postures. The GPPNs are further combined with grasping rectangle detection networks (GRDNs) to construct multilevel convolutional neural networks (ML-CNNs). In this study, a force-closure index was designed to analyze grasp quality, and force-closure grasping postures were generated in the GraspIt! environment. Depth images of objects were captured in the Gazebo environment to construct the dataset for the GPPNs. We describe simulation experiments conducted in the GraspIt! environment and study the influence of the image input and the palm pose input on the GPPNs using a variable-controlling approach. In addition, the ML-CNNs are compared with existing grasp detection methods. The simulation results verify that the ML-CNNs achieve high grasp quality. Grasping experiments were implemented on the Shadow hand platform, and the results show that the ML-CNNs can accurately grasp novel objects with good performance.
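The core idea of the GPPNs described in the abstract is feature-level fusion of two inputs, an object (depth) image and the palm pose, into one posture prediction. The following is a minimal numpy sketch of that multi-input fusion pattern only; all layer sizes, the pose parameterization, the number of candidate postures, and the randomly initialized weights are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative dimensions (assumptions, not taken from the paper).
IMG_FEAT = 64      # flattened depth-image feature vector
POSE_DIM = 7       # palm pose: position (3) + quaternion (4)
HIDDEN = 32        # per-branch hidden width
N_POSTURES = 5     # number of candidate grasp postures

# Random weights stand in for trained parameters.
W_img = 0.1 * rng.normal(size=(HIDDEN, IMG_FEAT))
W_pose = 0.1 * rng.normal(size=(HIDDEN, POSE_DIM))
W_out = 0.1 * rng.normal(size=(N_POSTURES, 2 * HIDDEN))

def predict_posture(img_feat, palm_pose):
    """Run each input through its own branch, concatenate the hidden
    features (feature-level fusion), and return a probability over
    candidate grasp postures."""
    h_img = relu(W_img @ img_feat)
    h_pose = relu(W_pose @ palm_pose)
    fused = np.concatenate([h_img, h_pose])
    return softmax(W_out @ fused)

probs = predict_posture(rng.normal(size=IMG_FEAT), rng.normal(size=POSE_DIM))
print(probs.shape)  # one probability per candidate posture
```

Because the fused vector carries both branches, the same object image can yield a different predicted posture under a different palm pose, which is the behavior the abstract attributes to human grasping.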
Pages: 2750-2762
Page count: 13