Empirical evaluation methods for multiobjective reinforcement learning algorithms

Cited by: 150
Authors
Vamplew, Peter [1 ]
Dazeley, Richard [1 ]
Berry, Adam [2 ]
Issabekov, Rustam [1 ]
Dekker, Evan [1 ]
Affiliations
[1] Univ Ballarat, Grad Sch Informat Technol & Math Sci, Ballarat, Vic 3353, Australia
[2] CSIRO Energy Ctr, Mayfield W, NSW 2304, Australia
Keywords
Multiobjective reinforcement learning; Multiple objectives; Empirical methods; Pareto fronts; Pareto optimal policies
DOI
10.1007/s10994-010-5232-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms.
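As an illustration of the kind of Pareto-front quality metric the abstract refers to, the sketch below computes the two-objective hypervolume indicator, a standard measure of how well a set of policy returns covers the objective space. This is a minimal sketch, not code from the paper; the function name, point set, and reference point are hypothetical.

# Minimal sketch (not from the paper): hypervolume indicator for a
# two-objective maximisation problem, a common Pareto-front quality metric.
def hypervolume_2d(front, ref):
    """Area dominated by the points in `front`, bounded below by `ref`.

    front -- iterable of (x, y) return pairs, both objectives maximised
    ref   -- reference point dominated by every point on the front
    """
    # Discard points that do not strictly dominate the reference point.
    pts = [(x, y) for (x, y) in front if x > ref[0] and y > ref[1]]
    # Sweep from largest to smallest first objective, adding the strip of
    # area each point contributes above the best second objective so far.
    pts.sort(key=lambda p: p[0], reverse=True)
    area, best_y = 0.0, ref[1]
    for x, y in pts:
        if y > best_y:
            area += (x - ref[0]) * (y - best_y)
            best_y = y
    return area

# Hypothetical approximate Pareto front and reference point.
front = [(1.0, 5.0), (3.0, 4.0), (4.0, 1.0)]
print(hypervolume_2d(front, (0.0, 0.0)))  # -> 14.0

A larger hypervolume indicates a front that dominates more of the objective space relative to the reference point, which is why such indicators are commonly used to compare the policy sets produced by different multiobjective learners.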
Pages: 51-80 (30 pages)