Expressing Arbitrary Reward Functions as Potential-Based Advice

Times Cited: 0
Authors
Harutyunyan, Anna [1 ]
Devlin, Sam [2 ]
Vrancx, Peter [1 ]
Nowe, Ann [1 ]
Affiliations
[1] Vrije Univ Brussel, Brussels, Belgium
[2] Univ York, York, N Yorkshire, England
Source
PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2015
Keywords
STOCHASTIC-APPROXIMATION; CONVERGENCE; TIME
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Effectively incorporating external advice is an important problem in reinforcement learning, especially as it moves into the real world. Potential-based reward shaping is a way to provide the agent with a specific form of additional reward, with the guarantee of policy invariance. In this work we give a novel way to incorporate an arbitrary reward function with the same guarantee, by implicitly translating it into the specific form of dynamic advice potentials, which are maintained as an auxiliary value function learnt at the same time. We show that advice provided in this way captures the input reward function in expectation, and demonstrate its efficacy empirically.
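For orientation, here is a minimal tabular sketch of the idea described in the abstract, not the authors' implementation: the external advice reward is never added to the environment reward directly; instead an auxiliary value function Phi is learned on its negation, and the main learner only receives the dynamic potential-based shaping term gamma*Phi(s', a') - Phi(s, a). The environment interface (reset/step), the SARSA update used for Phi, and all function names and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of advice delivered through
# learned dynamic potentials: Phi is trained on the NEGATED advice reward,
# and only the shaping term gamma*Phi(s',a') - Phi(s,a) reaches the learner.
import numpy as np


def epsilon_greedy(q_row, epsilon, rng):
    """Random action with probability epsilon, greedy otherwise."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))


def q_learning_with_advice(env, r_advice, n_states, n_actions,
                           episodes=500, gamma=0.99, alpha=0.1,
                           beta=0.1, epsilon=0.1, seed=0):
    """Q-learning shaped by an arbitrary advice reward via learned potentials.

    Assumes env.reset() -> state and env.step(a) -> (next_state, reward, done),
    with integer states/actions; r_advice(s, a) is the external reward function
    being expressed as potential-based advice.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))    # main action-value estimates
    Phi = np.zeros((n_states, n_actions))  # auxiliary advice potentials

    for _ in range(episodes):
        s = env.reset()
        a = epsilon_greedy(Q[s], epsilon, rng)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = epsilon_greedy(Q[s2], epsilon, rng)

            # Dynamic shaping term, read off Phi before this step's update.
            shaping = gamma * Phi[s2, a2] - Phi[s, a]

            # Main learner: environment reward plus the shaping term.
            bootstrap = 0.0 if done else gamma * np.max(Q[s2])
            Q[s, a] += alpha * (r + shaping + bootstrap - Q[s, a])

            # Auxiliary learner: SARSA on the *negated* advice reward, so the
            # shaping term recovers the advice reward in expectation.
            phi_bootstrap = 0.0 if done else gamma * Phi[s2, a2]
            Phi[s, a] += beta * (-r_advice(s, a) + phi_bootstrap - Phi[s, a])

            s, a = s2, a2
    return Q
```

Because the potentials are themselves learned online, the shaping term is computed from Phi as it stood at the time of the transition; keeping the advice in this potential-difference form is what preserves the policy-invariance guarantee the abstract refers to.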
Pages: 2652-2658
Number of pages: 7