Learning Continuous Grasping Function With a Dexterous Hand From Human Demonstrations

Cited by: 25
Authors
Ye, Jianglong [1 ]
Wang, Jiashun [2 ]
Huang, Binghao [1 ]
Qin, Yuzhe [1 ]
Wang, Xiaolong [1 ]
Affiliations
[1] Univ Calif San Diego, La Jolla, CA 92093 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords
Grasping; Robots; Trajectory; Training; Robot kinematics; Three-dimensional displays; Planning; Learning from demonstration; Dexterous manipulation; Deep learning in grasping and manipulation
DOI
10.1109/LRA.2023.3261745
CLC classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
We propose to learn to generate grasping motion for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model can generate a continuous and smooth grasping plan. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder (CVAE) using 3D human demonstrations. We first convert large-scale human-object interaction trajectories to robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we sample from CGF to generate different grasping plans in a simulator and select the successful ones to transfer to the real robot. By training on diverse human data, CGF generalizes to manipulating multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significantly higher success rate when transferred to grasping with the real Allegro Hand.
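The core idea of the abstract, a decoder that maps a continuous time input and a sampled latent code to a hand configuration, so that one latent sample yields one smooth grasp trajectory, can be sketched as follows. This is a minimal illustration under assumed sizes (latent dimension, hidden width), not the authors' CVAE architecture or training procedure; only the 16-joint count of the Allegro Hand comes from the paper's setting.

```python
import numpy as np

# Sketch of a CGF-style decoder: (t, z) -> joint angles.
# Layer sizes and the latent dimension are illustrative assumptions.
LATENT_DIM = 8      # size of the sampled grasp-plan code z (assumed)
NUM_JOINTS = 16     # the Allegro Hand has 16 actuated joints
HIDDEN = 64

rng = np.random.default_rng(0)
W1 = rng.standard_normal((LATENT_DIM + 1, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, NUM_JOINTS)) * 0.1
b2 = np.zeros(NUM_JOINTS)

def cgf_decoder(t: float, z: np.ndarray) -> np.ndarray:
    """Map a continuous time t in [0, 1] and a latent code z to joint angles.

    Because t is a continuous input to a smooth network, the plan can be
    evaluated at any temporal resolution, giving a smooth trajectory.
    """
    x = np.concatenate([[t], z])
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Each sampled latent code corresponds to one candidate grasp plan;
# sweeping t through [0, 1] traces out its trajectory.
z = rng.standard_normal(LATENT_DIM)
trajectory = np.stack([cgf_decoder(t, z) for t in np.linspace(0.0, 1.0, 50)])
print(trajectory.shape)  # (50, 16): 50 time steps, 16 joint angles
```

At inference time, the paper's pipeline would sample many such latent codes, roll the resulting plans out in a simulator, and keep only the successful ones for execution on the real hand.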
Pages: 2882-2889
Page count: 8