AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains

Cited by: 81
Authors
Fang, Hao-Shu [1 ]
Wang, Chenxi [1 ]
Fang, Hongjie [1 ]
Gou, Minghao [1 ]
Liu, Jirong [1 ]
Yan, Hengxu [1 ]
Liu, Wenhai [1 ]
Xie, Yichen [1 ]
Lu, Cewu [1 ,2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
Keywords
AnyGrasp; dynamic grasping; general grasping
DOI
10.1109/TRO.2023.3281153
CLC Classification Number
TP24 [Robotics]
Discipline Classification Code
080202; 1405
Abstract
As the basis of prehensile manipulation, it is vital to enable robots to grasp as robustly as humans. In daily manipulation, the human grasping system is prompt, accurate, flexible, and continuous across spatial and temporal domains. Few existing methods cover all of these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that endows robots with these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to improve grasping stability. Exploiting grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, can generate accurate, full-DoF, dense, and temporally smooth grasp poses efficiently, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, comparable to human subjects under controlled conditions. Over 900 mean picks per hour (MPPH) is reported on a single-arm system. For dynamic grasping, we demonstrate catching swimming robot fish in water.
Pages: 3929-3945
Page count: 17