kPAM 2.0: Feedback Control for Category-Level Robotic Manipulation

Cited by: 40
Authors
Gao, Wei [1 ]
Tedrake, Russ [1 ]
Affiliation
[1] MIT, CASIL, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation
Keywords
Robots; Task analysis; Robot kinematics; Shape; Three-dimensional displays; Service robots; Grasping; Dexterous manipulation; generalizable robotic manipulation; perception for grasping and manipulation;
DOI
10.1109/LRA.2021.3062315
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
In this letter, we explore generalizable, perception-to-action robotic manipulation for precise, contact-rich tasks. In particular, we contribute a framework for closed-loop robotic manipulation that automatically handles a category of objects, despite potentially unseen object instances and significant intra-category variations in shape, size and appearance. Previous approaches typically build a feedback loop on top of a real-time 6-DOF pose estimator. However, representing an object with a parameterized transformation from a fixed geometric template does not capture large intra-category shape variation. Hence we adopt the keypoint-based object representation proposed in [13] for category-level pick-and-place, and extend it to closed-loop manipulation policies for contact-rich tasks. We first augment keypoints with local orientation information. Using the oriented keypoints, we propose a novel object-centric action representation in terms of regulating the linear/angular velocity or force/torque of these oriented keypoints. This formulation is surprisingly versatile - we demonstrate that it can accomplish contact-rich manipulation tasks that require precision and dexterity for a category of objects with different shapes, sizes and appearances, such as peg-hole insertion for pegs and holes with significant shape variation and tight clearance. With the proposed object and action representation, our framework is also agnostic to the robot grasp pose and initial object configuration, making it flexible for integration and deployment. Video demonstration, source code and supplemental materials are available at https://sites.google.com/view/kpam2/home.
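To make the object-centric action representation concrete, the following is a minimal illustrative sketch (not the paper's actual controller) of regulating the linear/angular velocity of one oriented keypoint with a proportional law. The function name, the gain parameters `kp`/`kr`, and the choice of a cross-product term for axis alignment are all assumptions introduced here for illustration.

```python
import numpy as np

def keypoint_velocity_command(p, a, p_target, a_target, kp=1.0, kr=1.0):
    """Illustrative P-controller on one oriented keypoint (hypothetical, not from the paper).

    p, p_target : 3-vectors, current and desired keypoint position.
    a, a_target : 3-vectors, current and desired keypoint orientation axis.
    Returns (v, w): commanded linear and angular velocity for the keypoint.
    """
    p = np.asarray(p, dtype=float)
    p_target = np.asarray(p_target, dtype=float)
    a = np.asarray(a, dtype=float) / np.linalg.norm(a)            # normalize current axis
    a_target = np.asarray(a_target, dtype=float) / np.linalg.norm(a_target)

    v = kp * (p_target - p)          # drive the position error to zero
    w = kr * np.cross(a, a_target)   # angular velocity that rotates a toward a_target
    return v, w
```

Because the command is expressed on the keypoint itself rather than on a fixed object frame, the same target specification can be reused across object instances with different shapes, which is the property the abstract emphasizes.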
Pages: 2962-2969
Page count: 8
References
29 items in total
[1] Amanhoud W, 2019, Robotics: Science and Systems XV.
[2] Andrychowicz M, Baker B, Chociej M, Jozefowicz R, McGrew B, Pachocki J, Petron A, Plappert M, Powell G, Ray A, Schneider J, Sidor S, Tobin J, Welinder P, Weng L, Zaremba W. Learning dexterous in-hand manipulation. International Journal of Robotics Research, 2020, 39(1): 3-20.
[3] Diankov R, 2010, Automated construction of robotic manipulation programs.
[4] Finn C, 2016, IEEE International Conference on Robotics and Automation (ICRA), p. 512. DOI: 10.1109/ICRA.2016.7487173.
[5] Florence P, Manuelli L, Tedrake R. Self-Supervised Correspondence in Visuomotor Policy Learning. IEEE Robotics and Automation Letters, 2020, 5(2): 492-499.
[6] Gao W, 2019, arXiv:1909.06980.
[7] Holladay R, 2019, IEEE International Conference on Intelligent Robots and Systems (IROS), p. 7409. DOI: 10.1109/IROS40897.2019.8967889.
[8] Kappler D, Meier F, Issac J, Mainprice J, Cifuentes CG, Wuthrich M, Berenz V, Schaal S, Ratliff N, Bohg J. Real-time perception meets reactive motion generation. IEEE Robotics and Automation Letters, 2018, 3(3): 1864-1871.
[9] Kumar V, 2016, IEEE International Conference on Robotics and Automation (ICRA), p. 378. DOI: 10.1109/ICRA.2016.7487156.
[10] Levine S, 2016, Journal of Machine Learning Research, vol. 17.