Reward Learning from Narrated Demonstrations

Cited by: 4
Authors
Tung, Hsiao-Yu [1 ]
Harley, Adam W. [1 ]
Huang, Liang-Kang [1 ]
Fragkiadaki, Katerina [1 ]
Affiliations
[1] Carnegie Mellon Univ, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
DOI
10.1109/CVPR.2018.00732
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Humans effortlessly "program" one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or by supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes jointly learning natural language grounding and instructable behavioural policies, reinforced by perceptual detectors of natural language expressions grounded to the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations (NVD): visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD in which teachers perform activities while describing them in detail. We map the teachers' descriptions to perceptual reward detectors, and use them to train corresponding behavioural policies in simulation. We empirically show that our instructable agents (i) learn visual reward detectors from a small number of examples by exploiting hard-negative configurations mined from demonstration dynamics, (ii) develop pick-and-place policies using the learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours involving novel objects in novel locations at test time, instructed by natural language.
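The recipe sketched in the abstract can be made concrete with a short example. The following Python sketch is illustrative only and is not the authors' implementation: a binary reward detector over object-factorized features is fit from one narrated demonstration, using frames just before goal completion as the hard negatives mined from demonstration dynamics, and the detector's confidence is then exposed as a reward signal for policy training. All class names, feature dimensions, and the demonstration format are assumptions.

import torch
import torch.nn as nn

class RewardDetector(nn.Module):
    """Binary detector: does the current state satisfy the narrated goal?
    Inputs are object-factorized: one feature vector per object named in
    the instruction (e.g. "put the apple in the bowl" -> apple, bowl),
    mirroring the syntactic structure of the language goal."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, subj_feat: torch.Tensor, obj_feat: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(torch.cat([subj_feat, obj_feat], dim=-1)))

def train_detector(detector, demo_frames, goal_time, epochs=50, lr=1e-3):
    """demo_frames: per-timestep (subj_feat, obj_feat) pairs from one
    demonstration; goal_time: index from which the narrated goal holds.
    Frames just before goal_time act as hard negatives: visually close
    to success, but the goal is not yet satisfied."""
    opt = torch.optim.Adam(detector.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        for t, (s, o) in enumerate(demo_frames):
            label = torch.tensor([[1.0 if t >= goal_time else 0.0]])
            opt.zero_grad()
            loss = bce(detector(s[None], o[None]), label)
            loss.backward()
            opt.step()

def reward(detector, subj_feat, obj_feat):
    # The detector's confidence doubles as the reward an RL learner
    # would receive in place of a hand-coded success check.
    with torch.no_grad():
        return detector(subj_feat[None], obj_feat[None]).item()

# Hypothetical usage: a synthetic 10-frame demonstration whose narrated
# goal holds from frame 7 onward.
detector = RewardDetector(feat_dim=64)
frames = [(torch.randn(64), torch.randn(64)) for _ in range(10)]
train_detector(detector, frames, goal_time=7)
print(reward(detector, *frames[-1]))

Because positives and hard negatives both come from the same demonstration, the detector can be trained from a small number of examples, which is the property claim (i) of the abstract highlights.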
Pages: 7004-7013
Page count: 10
Related Papers
50 records in total
  • [31] Robot Learning from Failed Demonstrations
    Grollman, Daniel H.
    Billard, Aude G.
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2012, 4 (04) : 331 - 342
  • [32] Learning Temporal Dynamics from Cycles in Narrated Video
    Epstein, Dave
    Wu, Jiajun
    Schmid, Cordelia
    Sun, Chen
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1460 - 1469
  • [33] What's in a Primitive? Identifying Reusable Motion Trajectories in Narrated Demonstrations
    Mohseni-Kabir, Anahita
    Wu, Victoria
    Chernova, Sonia
    Rich, Charles
    2016 25TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2016, : 267 - 272
  • [35] Objective learning from human demonstrations
    Lin, Jonathan Feng-Shun
    Carreno-Medrano, Pamela
    Parsapour, Mahsa
    Sakr, Maram
    Kulic, Dana
    ANNUAL REVIEWS IN CONTROL, 2021, 51 : 111 - 129
  • [36] Learning a Behavioral Repertoire from Demonstrations
    Justesen, Niels
    Gonzalez-Duque, Miguel
    Cabarcas, Daniel
    Mouret, Jean-Baptiste
    Risi, Sebastian
    2020 IEEE CONFERENCE ON GAMES (IEEE COG 2020), 2020, : 383 - 390
  • [37] V-MIN: Efficient Reinforcement Learning through Demonstrations and Relaxed Reward Demands
    Martinez, David
    Alenya, Guillem
    Torras, Carme
    PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2015, : 2857 - 2863
  • [38] Human2bot: learning zero-shot reward functions for robotic manipulation from human demonstrations
    Salam, Yasir
    Li, Yinbei
    Herzog, Jonas
    Yang, Jiaqiang
    AUTONOMOUS ROBOTS, 2025, 49 (2)
  • [39] Bayesian inverse reinforcement learning for demonstrations of an expert in multiple dynamics: Toward estimation of transferable reward
    Yusuke N.
    Sachiyo A.
    TRANSACTIONS OF THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE, 2020, 35 (01)
  • [40] Adversarial Imitation Learning from Incomplete Demonstrations
    Sun, Mingfei
    Ma, Xiaojuan
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3513 - 3519