Joint Estimation of Expertise and Reward Preferences From Human Demonstrations

Cited by: 2
Authors
Carreno-Medrano, Pamela [1 ]
Smith, Stephen L. [2 ]
Kulic, Dana [1 ]
Affiliations
[1] Monash Univ, Fac Engn, Melbourne, Vic 3800, Australia
[2] Univ Waterloo, Dept Elect & Comp Engn, Fac Engn, Waterloo, ON N2L 3G1, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Robots; Task analysis; Behavioral sciences; Linear programming; Hidden Markov models; Reliability; Predictive models; Expertise inference; human factors; learning and adaptive systems; learning from demonstration;
DOI
10.1109/TRO.2022.3192969
Chinese Library Classification
TP24 [Robotics];
Discipline classification codes
080202; 1405;
Abstract
When a robot learns from human examples, most approaches assume that the human partner provides examples of optimal behavior. However, there are applications in which the robot learns from nonexpert humans. We argue that the robot should learn not only about the human's objectives, but also about their expertise level. The robot could then leverage this joint information to reduce or increase the frequency at which it provides assistance to its human partner, or to be more cautious when learning new skills from novice users. Similarly, by taking into account the human's expertise, the robot would also be able to infer a human's true objectives even when the human fails to properly demonstrate those objectives due to a lack of expertise. In this article, we propose to jointly infer the expertise level and the objective function of a human given observations of their (possibly) nonoptimal demonstrations. Two inference approaches are proposed. In the first approach, inference is done over a finite discrete set of possible objective functions and expertise levels. In the second approach, the robot optimizes over the space of all possible hypotheses and finds the objective function and the expertise level that best explain the observed human behavior. We demonstrate our proposed approaches both in simulation and with real user data.
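The first approach described in the abstract, inference over a finite discrete set of hypotheses, can be sketched as a joint Bayesian posterior over candidate reward functions and expertise levels. The sketch below is illustrative only and is not taken from the paper: it assumes a Boltzmann-rational demonstrator who selects trajectories from a finite candidate set with probability proportional to exp(beta * return), where beta plays the role of the expertise level. All function names and variables here are hypothetical.

```python
import numpy as np

def joint_posterior(returns, demo_idx, betas, prior=None):
    """Joint posterior over (reward, expertise) hypotheses.

    returns:  (n_rewards, n_trajs) array; returns[r, t] is the return of
              candidate trajectory t under candidate reward function r.
    demo_idx: indices of the trajectories the human actually demonstrated.
    betas:    candidate expertise (rationality) coefficients; higher beta
              means the demonstrator is closer to optimal.
    prior:    optional (n_rewards, n_betas) prior; uniform if omitted.
    """
    n_rewards, _ = returns.shape
    log_post = np.zeros((n_rewards, len(betas)))
    for j, beta in enumerate(betas):
        logits = beta * returns                      # (n_rewards, n_trajs)
        log_z = np.logaddexp.reduce(logits, axis=1)  # softmax normalizer
        # log-likelihood of all demonstrations under each candidate reward
        log_post[:, j] = logits[:, demo_idx].sum(axis=1) - len(demo_idx) * log_z
    if prior is not None:
        log_post += np.log(prior)
    post = np.exp(log_post - log_post.max())         # stabilize, then normalize
    return post / post.sum()
```

Under this model, a demonstrator who repeatedly picks the trajectory that is optimal under one candidate reward concentrates posterior mass both on that reward and on the higher expertise levels, which is the qualitative behavior the abstract describes.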
Pages: 681-698
Page count: 18