Feature Expansive Reward Learning: Rethinking Human Input

Cited by: 19
Authors
Bobu, Andreea [1]
Wiggert, Marius [1]
Tomlin, Claire [1]
Dragan, Anca D. [1]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source
2021 16TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI | 2021
Keywords
robot learning from human input; human teachers;
DOI
10.1145/3434073.3444667
CLC number
TP3 [Computing technology, computer technology]
Subject classification code
0812
Abstract
When a person is not satisfied with how a robot performs a task, they can intervene to correct it. Reward learning methods enable the robot to adapt its reward function online based on such human input, but they rely on handcrafted features. When the correction cannot be explained by these features, recent work in deep Inverse Reinforcement Learning (IRL) suggests that the robot could ask for task demonstrations and recover a reward defined over the raw state space. Our insight is that rather than implicitly learning about the missing feature(s) from demonstrations, the robot should instead ask for data that explicitly teaches it about what it is missing. We introduce a new type of human input in which the person guides the robot from states where the feature being taught is highly expressed to states where it is not. We propose an algorithm for learning the feature from the raw state space and integrating it into the reward function. By focusing the human input on the missing feature, our method decreases sample complexity and improves generalization of the learned reward over the above deep IRL baseline. We show this in experiments with a physical 7DOF robot manipulator, as well as in a user study conducted in a simulated environment.
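The abstract describes human input in which a person guides the robot from states where the missing feature is highly expressed to states where it is not. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: it assumes each "feature trace" is an array of raw states ordered from high to low feature expression, assumes a simple linear feature `phi(s) = w @ s`, and uses a softplus monotonicity loss (an assumption on my part) to fit `w` so the feature decreases along every trace. All function names are hypothetical.

```python
import numpy as np

def monotonicity_loss(w, trace):
    """Penalize any increase of phi = trace @ w along a (high -> low) trace."""
    phi = trace @ w
    diffs = phi[1:] - phi[:-1]               # each difference should be <= 0
    return np.sum(np.logaddexp(0.0, diffs))  # softplus penalty on violations

def learn_feature(traces, dim, lr=0.1, steps=500):
    """Gradient descent on the monotonicity loss over all traces."""
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for trace in traces:
            phi = trace @ w
            diffs = phi[1:] - phi[:-1]
            sig = 1.0 / (1.0 + np.exp(-diffs))      # d(softplus)/d(diffs)
            grad += (trace[1:] - trace[:-1]).T @ sig
        w -= lr * grad
    return w

# Toy example: the hidden feature is the first state coordinate, which the
# person drives from 1.0 down to 0.0; the second coordinate is noise.
rng = np.random.default_rng(0)
trace = np.stack([np.linspace(1.0, 0.0, 10), rng.normal(size=10)], axis=1)
w = learn_feature([trace], dim=2)
```

Because the loss only constrains the ordering of feature values, the scale of `w` is unidentified in this sketch; a real implementation would normalize or regularize it, and the paper uses raw high-dimensional states rather than a linear feature.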
Pages: 216-224
Page count: 9