CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks

Cited by: 46
Authors
Mees, Oier [1]
Hermann, Lukas [1]
Rosete-Beas, Erick [1]
Burgard, Wolfram [1]
Affiliations
[1] Univ Freiburg, D-79110 Freiburg, Germany
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2022, Vol. 7, No. 3
Keywords
Data sets for robot learning; machine learning for robot control; imitation learning; natural dialog for HRI;
DOI
10.1109/LRA.2022.3180108
CLC Classification Number
TP24 [Robotics];
Subject Classification Numbers
080202; 1405
Abstract
General-purpose robots coexisting with humans in their environment must learn to relate human language to their perceptions and actions to be useful in a range of daily tasks. Moreover, they need to acquire a diverse repertoire of general-purpose skills that allow composing long-horizon tasks by following unconstrained language instructions. In this letter, we present Composing Actions from Language and Vision (CALVIN), an open-source simulated benchmark to learn long-horizon language-conditioned tasks. Our aim is to make it possible to develop agents that can solve many robotic manipulation tasks over a long horizon, from onboard sensors, and specified only via human language. CALVIN tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets, and the benchmark supports flexible specification of sensor suites. We evaluate agents in zero-shot generalization to novel language instructions and to novel environments. We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting that there is significant room for developing innovative agents that learn to relate human language to their world models with this benchmark.
Pages: 7327 - 7334
Page count: 8
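
Since the abstract describes an open-source benchmark of recorded, language-annotated manipulation episodes, a minimal sketch of reading one recorded step with NumPy is shown below. The directory layout, file-naming scheme, and array keys ('rgb_static', 'rgb_gripper', 'robot_obs', 'actions') are assumptions about the public release, not a confirmed API, and may differ from your local copy of the dataset.

```python
# Minimal sketch of loading one CALVIN-style episode step (assumed format).
from pathlib import Path
import numpy as np

def load_episode_step(data_dir: str, step_idx: int) -> dict:
    """Load a single recorded step stored as a compressed .npz archive.

    The zero-padded file name and the key names below are assumptions
    about the open-source release; adjust them to the actual dataset.
    """
    path = Path(data_dir) / f"episode_{step_idx:07d}.npz"  # hypothetical naming scheme
    with np.load(path, allow_pickle=True) as data:
        return {
            "rgb_static": data["rgb_static"],    # static camera RGB image
            "rgb_gripper": data["rgb_gripper"],  # gripper camera RGB image
            "robot_obs": data["robot_obs"],      # proprioceptive state vector
            "actions": data["actions"],          # commanded end-effector action
        }

if __name__ == "__main__":
    step = load_episode_step("calvin_dataset/training", 0)
    print({k: np.asarray(v).shape for k, v in step.items()})
```

In this hypothetical layout, a language-conditioned policy would consume the camera and proprioceptive arrays together with a free-form instruction string and predict the action array for each step.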