Self-Supervised Correspondence in Visuomotor Policy Learning

Cited by: 89
Authors
Florence, Peter [1]
Manuelli, Lucas [1]
Tedrake, Russ [1]
Affiliation
[1] MIT, Computer Science & Artificial Intelligence Laboratory (CSAIL), Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Deep learning in robotics and automation; perception for grasping and manipulation; visual learning; manipulation
DOI
10.1109/LRA.2019.2956365
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
In this letter, we explore using self-supervised correspondence for improving the generalization performance and sample efficiency of visuomotor policy learning. Prior work has primarily used approaches such as autoencoding, pose-based losses, and end-to-end policy optimization in order to train the visual portion of visuomotor policies. We instead propose an approach using self-supervised dense visual correspondence training and show that this enables visuomotor policy learning with surprisingly high generalization performance with modest amounts of data. Using imitation learning, we demonstrate extensive hardware validation on challenging manipulation tasks with as few as 50 demonstrations. Our learned policies can generalize across classes of objects, react to deformable object configurations, and manipulate textureless symmetrical objects in a variety of backgrounds, all with closed-loop, real-time vision-based policies. Simulated imitation learning experiments suggest that correspondence training offers sample complexity and generalization benefits compared to autoencoding and end-to-end training.
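The abstract's core idea is to use self-supervised dense visual correspondence, rather than autoencoding or end-to-end features, as the visual front end of a policy. A minimal sketch of that interface, with NumPy stand-ins (the names `correspond` and `policy`, the random descriptors, and the linear action map are illustrative assumptions, not the authors' implementation — in the paper the per-pixel descriptors come from a trained dense-correspondence network):

```python
import numpy as np

def correspond(descriptor_image, reference_descriptors):
    """For each reference descriptor, find the best-matching pixel.

    descriptor_image: (H, W, D) dense per-pixel descriptors (stand-in for
        the output of a self-supervised correspondence network).
    reference_descriptors: (K, D) descriptors selected on a reference view
        of the object (e.g., task-relevant keypoints).
    Returns a (K, 2) array of (row, col) correspondence locations.
    """
    H, W, D = descriptor_image.shape
    flat = descriptor_image.reshape(-1, D)  # (H*W, D)
    # Squared descriptor distance between every pixel and every reference.
    d2 = ((flat[:, None, :] - reference_descriptors[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=0)  # best-matching pixel index per reference
    return np.stack([idx // W, idx % W], axis=1)

def policy(descriptor_image, reference_descriptors, weights, bias):
    """Toy linear policy: map correspondence locations to an action.

    The flattened (row, col) correspondences act as a low-dimensional
    visual state; imitation learning would fit `weights`/`bias` (or an
    MLP in their place) to demonstration actions.
    """
    pts = correspond(descriptor_image, reference_descriptors)
    return weights @ pts.astype(float).ravel() + bias
```

Because the correspondences track object points across instances and configurations, the downstream policy sees a representation that is already invariant to much of the visual variation, which is consistent with the generalization and sample-efficiency claims above.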
Pages: 492-499 (8 pages)