Robot Learning with Sensorimotor Pre-training

Cited by: 0
Authors
Radosavovic, Ilija [1 ]
Shi, Baifeng [1 ]
Fu, Letian [1 ]
Goldberg, Ken [1 ]
Darrell, Trevor [1 ]
Malik, Jitendra [1 ]
Affiliations
[1] University of California, Berkeley, Berkeley, CA 94720, USA
Source
CONFERENCE ON ROBOT LEARNING | 2023, Vol. 229
Keywords
Robot Learning; Self-supervised; Sensorimotor; Pre-training;
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
We present a self-supervised sensorimotor pre-training approach for robotics. Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens. Given a sequence of camera images, proprioceptive robot states, and actions, we encode the sequence into tokens, mask out a subset, and train a model to predict the missing content from the rest. We hypothesize that if a robot can predict the masked-out content, it will have acquired a good model of the physical world that can enable it to act. RPT is designed to operate on latent visual representations, which makes prediction tractable, enables scaling to larger models, and allows fast inference on a real robot. To evaluate our approach, we collected a dataset of 20,000 real-world trajectories over 9 months using a combination of motion planning and grasping algorithms. We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
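The masked-prediction recipe in the abstract lends itself to a short sketch. The following PyTorch code is a minimal, illustrative rendering of the idea only: image latents, proprioceptive states, and actions are projected into a shared token space, interleaved per timestep, a random subset of tokens is replaced by a learned mask token, and a Transformer encoder is trained to reconstruct the masked content. All names, dimensions, and the masking ratio here are assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class MaskedSensorimotorTransformer(nn.Module):
    # Hypothetical sketch of masked sensorimotor prediction (not the RPT code).
    def __init__(self, img_dim=768, state_dim=7, act_dim=7,
                 d_model=256, n_heads=8, n_layers=4, max_tokens=512):
        super().__init__()
        # One projection per modality into a shared d_model token space.
        self.proj = nn.ModuleDict({
            "img": nn.Linear(img_dim, d_model),      # latent visual features
            "state": nn.Linear(state_dim, d_model),  # proprioception
            "act": nn.Linear(act_dim, d_model),      # actions
        })
        # Per-modality heads reconstruct the original inputs.
        self.head = nn.ModuleDict({
            "img": nn.Linear(d_model, img_dim),
            "state": nn.Linear(d_model, state_dim),
            "act": nn.Linear(d_model, act_dim),
        })
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        self.pos_emb = nn.Parameter(torch.zeros(1, max_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, img_lat, state, act, mask_ratio=0.5):
        # img_lat: (B, T, img_dim), state: (B, T, state_dim), act: (B, T, act_dim)
        x = torch.stack([self.proj["img"](img_lat),
                         self.proj["state"](state),
                         self.proj["act"](act)], dim=2)   # (B, T, 3, D)
        B, T, M, D = x.shape
        x = x.flatten(1, 2) + self.pos_emb[:, : T * M]    # (B, 3T, D)
        # Replace a random subset of tokens with the learned mask token.
        masked = torch.rand(B, T * M, device=x.device) < mask_ratio
        x = torch.where(masked.unsqueeze(-1), self.mask_token, x)
        h = self.encoder(x).reshape(B, T, M, D)
        masked = masked.reshape(B, T, M)
        # Mean-squared reconstruction error, scored only on masked positions.
        loss = x.new_zeros(())
        targets = [("img", img_lat), ("state", state), ("act", act)]
        for i, (name, tgt) in enumerate(targets):
            err = ((self.head[name](h[:, :, i]) - tgt) ** 2).mean(-1)
            loss = loss + (err * masked[:, :, i]).sum() / masked[:, :, i].sum().clamp(min=1)
        return loss

Per the abstract, operating on precomputed image latents rather than raw pixels is what keeps this prediction problem tractable and inference fast; after pre-training, an encoder of this form would be fine-tuned with a policy head for downstream control.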
Pages: 11