Dueling Posterior Sampling for Preference-Based Reinforcement Learning

Cited by: 0
Authors
Novoseller, Ellen R. [1 ]
Wei, Yibing [1 ]
Sui, Yanan [2 ]
Yue, Yisong [1 ]
Burdick, Joel W. [1 ]
Affiliations
[1] Caltech, Department of Computing & Mathematical Sciences, Pasadena, CA 91125, USA
[2] Tsinghua University, School of Aerospace Engineering, Beijing 100084, China
Source
CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020) | 2020, Vol. 124
Keywords: None listed
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
In preference-based reinforcement learning (RL), an agent interacts with the environment while receiving preferences instead of absolute feedback. While there is increasing research activity in preference-based RL, the design of formal frameworks that admit tractable theoretical analysis remains an open challenge. Building upon ideas from preference-based bandit learning and posterior sampling in RL, we present Dueling Posterior Sampling (DPS), which employs preference-based posterior sampling to learn both the system dynamics and the underlying utility function that governs the preference feedback. Because preference feedback is provided on trajectories rather than on individual state-action pairs, we develop a Bayesian approach to the credit assignment problem, translating preferences into a posterior distribution over state-action reward models. We prove an asymptotic Bayesian no-regret rate for DPS with a Bayesian linear regression credit assignment model; to our knowledge, this is the first regret guarantee for preference-based RL. We also discuss possible avenues for extending the proof methodology to other credit assignment models. Finally, we evaluate the approach empirically, showing competitive performance against existing baselines.
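The credit-assignment step sketched in the abstract can be made concrete with a short illustration. The following is not the authors' implementation; it is a minimal sketch assuming each trajectory is summarized by the sum of its state-action feature vectors and that a pairwise preference is regressed onto the feature difference, yielding a Gaussian posterior over reward weights from which Thompson-style samples can be drawn. The class and parameter names (BayesianLinearCredit, prior_var, noise_var) are illustrative, not from the paper.

import numpy as np

class BayesianLinearCredit:
    """Gaussian posterior over state-action reward weights (sketch)."""

    def __init__(self, dim, prior_var=1.0, noise_var=1.0):
        self.noise_var = noise_var
        self.precision = np.eye(dim) / prior_var  # posterior precision matrix
        self.weighted_sum = np.zeros(dim)         # accumulates x * y / noise_var

    def update(self, feats_a, feats_b, pref):
        """pref = +1.0 if trajectory a was preferred, -1.0 otherwise."""
        x = feats_a - feats_b                     # trajectory feature difference
        self.precision += np.outer(x, x) / self.noise_var
        self.weighted_sum += pref * x / self.noise_var

    def sample_rewards(self, rng):
        """Thompson-sample a reward-weight vector from the posterior."""
        cov = np.linalg.inv(self.precision)
        mean = cov @ self.weighted_sum
        return rng.multivariate_normal(mean, cov)

# Hypothetical usage with 4-dimensional trajectory feature sums:
rng = np.random.default_rng(0)
credit = BayesianLinearCredit(dim=4)
credit.update(np.array([1.0, 0.0, 2.0, 0.0]),
              np.array([0.0, 1.0, 1.0, 1.0]), pref=1.0)
w = credit.sample_rewards(rng)  # plan with these sampled reward weights

In a DPS-style loop one would sample two reward models per iteration (alongside sampled dynamics), roll out a trajectory under each, query a preference between the pair, and feed the outcome back through update, shrinking the posterior as comparisons accumulate.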
Pages: 1029-1038 (10 pages)