Using physical demonstrations, background knowledge and vocal comments for task learning

Cited by: 3
|
Authors
Pardowitz, M. [1]
Zoellner, R. [1]
Knoop, S. [1]
Dillmann, R. [1]
Affiliations
[1] Univ Karlsruhe, Inst Comp Sci & Engn, POB 6980, D-76138 Karlsruhe, Germany
Source
2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12 | 2006
Keywords
DOI
10.1109/IROS.2006.282506
CLC classification
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
Robot assistants sharing an environment with humans have to interact with them and learn, or at least adapt to, individual human needs. One of the core abilities is learning from human demonstrations, where the robot is supposed to observe the execution of a task, acquire task knowledge, and reproduce it. In this paper, a system to interpret and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the knowledge. A measure for assessing the information content of task features is introduced that relies both on general background knowledge and on task-specific knowledge gathered from the user demonstrations. Besides the autonomous information estimation of features, speech comments given during execution that point out the relevance of features are considered as well.
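The abstract's idea of scoring a task feature's information content from its consistency across demonstrations, weighted by background knowledge and boosted by vocal comments, can be illustrated with a minimal sketch. This is an entropy-based approximation under assumed inputs (discrete feature values per demonstration, a hypothetical `prior_weight` and `vocal_boost`), not the paper's actual measure:

```python
import math
from collections import Counter

def feature_entropy(values):
    """Shannon entropy of a discrete feature's values across demonstrations."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def feature_relevance(values, prior_weight=1.0, vocal_boost=0.0):
    """Relevance score: a feature that stays constant across demonstrations
    (low entropy) is treated as more informative; prior_weight stands in for
    general background knowledge, vocal_boost for a user speech comment
    marking the feature as relevant. Both names are illustrative assumptions."""
    max_h = math.log2(len(values)) if len(values) > 1 else 1.0
    consistency = 1.0 - feature_entropy(values) / max_h
    return prior_weight * consistency + vocal_boost

# Three demonstrations of a pouring task: grasp type is constant,
# approach side varies, so the grasp feature scores as more informative.
grasp = ["power", "power", "power"]
side = ["left", "right", "left"]
print(feature_relevance(grasp) > feature_relevance(side))  # True
```

A vocal comment such as "watch the grasp" would simply raise that feature's score via `vocal_boost`, mirroring how the paper lets speech comments override the autonomous estimate.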
Pages: 322 / +
Number of pages: 2
Related papers
50 records
  • [1] Incremental learning of tasks from user demonstrations, past experiences, and vocal comments
    Pardowitz, Michael
    Knoop, Steffen
    Dillmann, Ruediger
    Zoellner, Raoul D.
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2007, 37 (02): : 322 - 332
  • [2] Incorporating Background Knowledge into Dialogue Generation Using Multi-task Transformer Learning
    Yuan, Yiming
    Cai, Xiantao
    PROCEEDINGS OF THE 2021 IEEE 24TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD), 2021, : 1046 - 1051
  • [3] Learning Task Priorities From Demonstrations
    Silverio, Joao
    Calinon, Sylvain
    Rozo, Leonel
    Caldwell, Darwin G.
    IEEE TRANSACTIONS ON ROBOTICS, 2019, 35 (01) : 78 - 94
  • [4] Learning Task Specifications from Demonstrations
    Vazquez-Chanlatte, Marcell
    Jha, Susmit
    Tiwari, Ashish
    Ho, Mark K.
    Seshia, Sanjit A.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [5] Extracting Kinematic Background Knowledge from Interactions Using Task-Sensitive Relational Learning
    Hoefer, Sebastian
    Lang, Tobias
    Brock, Oliver
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2014, : 4342 - 4347
  • [6] Learning task goals interactively with visual demonstrations
    Kirk, James
    Mininger, Aaron
    Laird, John
    BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES, 2016, 18 : 1 - 8
  • [7] Learning Temporal Task Specifications From Demonstrations
    Baert, Mattijs
    Leroux, Sam
    Simoens, Pieter
    EXPLAINABLE AND TRANSPARENT AI AND MULTI-AGENT SYSTEMS, EXTRAAMAS 2024, 2024, 14847 : 81 - 98
  • [8] Shaping in reinforcement learning by knowledge transferred from human-demonstrations of a simple similar task
    Wang, Guo-Fang
    Fang, Zhou
    Li, Ping
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2018, 34 (01) : 711 - 720
  • [9] Using background knowledge in Multilayer Perceptron learning
    Lampinen, J
    Selonen, A
    SCIA '97 - PROCEEDINGS OF THE 10TH SCANDINAVIAN CONFERENCE ON IMAGE ANALYSIS, VOLS 1 AND 2, 1997, : 545 - 549
  • [10] Verification of medical guidelines using background knowledge in task networks
    Hommersom, Arjen
    Groot, Perry
    Lucas, Peter J. F.
    Balser, Michael
    Schmitt, Jonathan
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2007, 19 (06) : 832 - 846