Improved Bayesian inverse reinforcement learning based on demonstration and feedback

Cited by: 0
Authors
Tang H. [1]
Wang A. [1]
Yang X. [1]
Affiliations
[1] School of Information, Beijing Wuzi University, Beijing
Funding
National Natural Science Foundation of China;
Keywords
Bayesian rule; Demonstration and feedback; Inverse reinforcement learning; IRLDF algorithm;
DOI
10.1504/IJWMC.2019.103113
Abstract
A major obstacle to traditional reinforcement learning is that the reward function must be specified by hand, which introduces strong subjectivity. Inverse reinforcement learning (IRL) addresses this problem, but traditional IRL requires optimal demonstrations, a requirement that is rarely met in practice. This paper therefore proposes an interactive learning method, the IRLDF algorithm, which strengthens the learned reward function by combining evaluative feedback with demonstrations and iteratively applying an improved Bayesian rule to refine the agent's policy. The proposed method was tested in experimental and simulation tasks, and the results show that its efficiency improves significantly under varying degrees of non-optimal demonstration. Copyright © 2019 Inderscience Enterprises Ltd.
Pages: 361-366
Number of pages: 5
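
The abstract gives only a high-level description of the method, so the following is a minimal illustrative sketch of the general idea it names, not the paper's actual IRLDF algorithm: a Bayesian IRL posterior over reward weights that combines a Boltzmann-rational likelihood for demonstrated actions with a sigmoid likelihood for binary action feedback, sampled by random-walk Metropolis-Hastings. The toy chain MDP, the temperature beta, the feedback model, and the Gaussian prior are all assumptions made here for illustration.

```python
import numpy as np

# Hypothetical toy problem: a 5-state deterministic chain, actions move left/right.
N_STATES = 5
ACTIONS = (-1, +1)
GAMMA = 0.9

def q_values(w, n_iter=60):
    """Value iteration for the chain: Q(s,a) = w[s'] + gamma * V(s')."""
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(n_iter):
        V = Q.max(axis=1)
        for s in range(N_STATES):
            for i, a in enumerate(ACTIONS):
                s2 = min(max(s + a, 0), N_STATES - 1)
                Q[s, i] = w[s2] + GAMMA * V[s2]
    return Q

def log_likelihood(w, demos, feedback, beta=5.0):
    """log P(demos | w) + log P(feedback | w) under a Boltzmann-rational model."""
    Q = q_values(w)
    logp = 0.0
    for s, a in demos:                       # demonstrated state-action pairs
        z = beta * Q[s]
        logp += z[a] - (z.max() + np.log(np.exp(z - z.max()).sum()))
    for s, a, ok in feedback:                # ok = +1 approve, -1 disapprove
        adv = Q[s, a] - Q[s].mean()
        logp += -np.logaddexp(0.0, -ok * beta * adv)   # log sigmoid of advantage
    return logp

def posterior_samples(demos, feedback, n=2000, step=0.2, seed=0):
    """Random-walk Metropolis-Hastings over reward weights with a N(0,1) prior."""
    rng = np.random.default_rng(seed)
    w = np.zeros(N_STATES)
    post = log_likelihood(w, demos, feedback) - 0.5 * (w ** 2).sum()
    samples = []
    for _ in range(n):
        w2 = w + rng.normal(0.0, step, N_STATES)
        post2 = log_likelihood(w2, demos, feedback) - 0.5 * (w2 ** 2).sum()
        if np.log(rng.random()) < post2 - post:
            w, post = w2, post2
        samples.append(w.copy())
    return np.array(samples)

# A noisy demonstration heads right, then takes a suboptimal left step at state 2;
# evaluator feedback corrects that step (action indices: 0 = left, 1 = right).
demos = [(0, 1), (1, 1), (2, 0)]
feedback = [(2, 0, -1), (2, 1, +1)]
w_hat = posterior_samples(demos, feedback)[1000:].mean(axis=0)
print("posterior-mean reward per state:", w_hat)
```

The two feedback terms illustrate the abstract's claim: disapproval of the suboptimal demonstrated step at state 2 pulls posterior mass away from reward functions that would explain that step, so the learned reward degrades gracefully as demonstrations become less optimal.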