Grasp Pose Detection in Point Clouds

Cited by: 351
Authors
ten Pas, Andreas [1 ]
Gualtieri, Marcus [1 ]
Saenko, Kate [2 ]
Platt, Robert [1 ]
Affiliations
[1] Northeastern Univ, 360 Huntington Ave, Boston, MA 02115 USA
[2] Boston Univ, Boston, MA 02215 USA
Funding
National Science Foundation (USA);
Keywords
grasping; manipulation; perception; grasp detection;
DOI
10.1177/0278364917735594
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real-world grasping. This paper proposes a number of innovations that together result in an improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.
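To make the input/output contract described in the abstract concrete, the following is a minimal, hypothetical Python sketch of a grasp pose detector: it consumes a raw point cloud (no CAD model of the object), samples candidate 6-DOF hand poses, scores them with a placeholder classifier, and returns the highest-scoring grasps. All names and the scoring logic here are illustrative assumptions for exposition, not the authors' GPD implementation, which uses learned classifiers over local hand-aligned geometry.

import numpy as np
from dataclasses import dataclass

@dataclass
class GraspPose:
    position: np.ndarray      # 3-vector: grasp center in the point-cloud frame
    orientation: np.ndarray   # 3x3 rotation: approach and closing directions
    score: float              # confidence that the grasp would succeed

def detect_grasps(cloud: np.ndarray, num_candidates: int = 100, top_k: int = 10):
    """Toy stand-in for grasp pose detection: sample candidate hand poses
    anchored at cloud points, score each one, and return the best few.
    The random score below is a placeholder for a learned grasp classifier."""
    rng = np.random.default_rng(0)
    grasps = []
    for _ in range(num_candidates):
        p = cloud[rng.integers(len(cloud))]                 # anchor at a cloud point
        R = np.linalg.qr(rng.standard_normal((3, 3)))[0]    # random orthonormal frame
        score = float(rng.random())                         # placeholder classifier output
        grasps.append(GraspPose(p, R, score))
    return sorted(grasps, key=lambda g: g.score, reverse=True)[:top_k]

if __name__ == "__main__":
    cloud = np.random.rand(5000, 3)      # stand-in for a noisy, partial sensor cloud
    for g in detect_grasps(cloud)[:3]:
        print(g.position, round(g.score, 3))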
Pages: 1455-1473
Number of pages: 19