Active Preference-Based Gaussian Process Regression for Reward Learning

Cited: 0
Authors
Biyik, Erdem [1]
Huynh, Nicolas [2 ]
Kochenderfer, Mykel J. [3 ]
Sadigh, Dorsa [4 ]
Affiliations
[1] Stanford Univ, Elect Engn, Stanford, CA 94305 USA
[2] Ecole Polytech, Appl Math, Palaiseau, France
[3] Stanford Univ, Aeronaut & Astronaut, Stanford, CA 94305 USA
[4] Stanford Univ, Comp Sci, Stanford, CA 94305 USA
Source
ROBOTICS: SCIENCE AND SYSTEMS XVI | 2020
Funding
National Science Foundation (US);
Keywords
DOI
Not available
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
Designing reward functions is a challenging problem in AI and robotics. Humans usually have a difficult time directly specifying all the desirable behaviors that a robot needs to optimize. One common approach is to learn reward functions from collected expert demonstrations. However, learning reward functions from demonstrations introduces many challenges: some methods require highly structured models, e.g., reward functions that are linear in some predefined set of features, while others adopt less structured reward functions that in turn require a tremendous amount of data. In addition, humans tend to have a difficult time providing demonstrations on robots with high degrees of freedom, or even quantifying reward values for given demonstrations. To address these challenges, we present a preference-based learning approach in which human feedback takes only the form of comparisons between trajectories. Furthermore, we do not assume a highly constrained structure on the reward function. Instead, we model the reward function using a Gaussian process (GP) and propose a mathematical formulation to actively fit a GP using only human preferences. Our approach enables us to tackle both the inflexibility and the data inefficiency problems within a preference-based learning framework. Our results in simulations and a user study suggest that our approach can efficiently learn expressive reward functions for robotics tasks.
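The core idea of the abstract, learning a reward function with a GP prior from pairwise trajectory comparisons, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a logistic (Bradley-Terry) preference likelihood rather than the paper's exact model, fits the latent rewards by MAP gradient ascent, and omits the active query selection that the paper contributes. All function names and parameters here are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.2):
    # Squared-exponential kernel between feature rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_gp_reward(X, prefs, lengthscale=0.2, iters=500, lr=0.1):
    """MAP estimate of latent rewards f at trajectory features X under a
    GP prior, given preferences (i, j) meaning trajectory i beat trajectory j.

    Log posterior (up to a constant):
        sum over (i, j) of log sigmoid(f_i - f_j)  -  0.5 * f^T K^{-1} f
    which is concave in f, so simple gradient ascent converges.
    """
    n = len(X)
    K = rbf_kernel(X, X, lengthscale) + 1e-2 * np.eye(n)  # jitter/noise term
    K_inv = np.linalg.inv(K)
    f = np.zeros(n)
    for _ in range(iters):
        grad = -K_inv @ f                      # gradient of the GP prior term
        for (i, j) in prefs:
            p = sigmoid(f[i] - f[j])           # P(i preferred over j)
            grad[i] += 1.0 - p                 # gradient of log-likelihood
            grad[j] -= 1.0 - p
        f += lr * grad
    return f
```

For example, with 1-D trajectory features whose true reward is increasing in the feature, feeding in all consistent pairwise comparisons recovers a latent reward with the same ordering; the actual method would additionally select, at each round, the comparison query that is most informative about f.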
Pages: 10