A probabilistic interpretation of self-paced learning with applications to reinforcement learning

Cited: 0
Authors
Klink, Pascal [1 ]
Abdulsamad, Hany [1 ]
Belousov, Boris [1 ]
D'Eramo, Carlo [1 ]
Peters, Jan [1 ]
Pajarinen, Joni [1 ,2 ]
Affiliations
[1] Intelligent Autonomous Systems, TU Darmstadt, Germany
[2] Department of Electrical Engineering and Automation, Aalto University, Finland
Funding
EU Horizon 2020
Keywords
Economic and social effects; Curricula
DOI
Not available
Abstract
Across machine learning, the use of curricula has shown strong empirical potential to improve learning from data by avoiding local optima of training objectives. For reinforcement learning (RL), curricula are especially interesting, as the underlying optimization has a strong tendency to get stuck in local optima due to the exploration-exploitation trade-off. Recently, a number of approaches for the automatic generation of curricula for RL have been shown to increase performance while requiring less expert knowledge than manually designed curricula. However, these approaches are seldom investigated from a theoretical perspective, preventing a deeper understanding of their mechanics. In this paper, we present an approach to automated curriculum generation in RL with a clear theoretical underpinning. More precisely, we formalize the well-known self-paced learning paradigm as inducing a distribution over training tasks, which trades off between task complexity and the objective of matching a desired task distribution. Experiments show that training on this induced distribution helps to avoid poor local optima across RL algorithms in different tasks with uninformative rewards and challenging exploration requirements. © 2021 Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen.
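The trade-off described in the abstract, between training on tasks the agent currently performs well on (low "complexity" for the learner) and matching a desired task distribution, can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed toy setting: a discrete set of tasks, a known per-task expected return `J`, a target task distribution `mu`, and a temperature `alpha` weighting a KL penalty. Maximizing `E_p[J] - alpha * KL(p || mu)` over distributions `p` then has the closed form `p(c) ∝ mu(c) * exp(J(c) / alpha)` (hypothetical names throughout; note the actual method also involves a schedule for the trade-off over training).

```python
import numpy as np

def self_paced_distribution(J, mu, alpha):
    """Closed-form maximizer of E_p[J] - alpha * KL(p || mu) over a
    discrete task set: p(c) is proportional to mu(c) * exp(J(c) / alpha)."""
    logits = np.log(mu) + J / alpha
    logits -= logits.max()  # shift for numerical stability before exponentiating
    p = np.exp(logits)
    return p / p.sum()

# Toy example: three tasks, the agent currently performs best on task 0,
# and the desired final distribution over tasks is uniform.
J = np.array([1.0, 0.2, -0.5])   # expected return per task (assumed known here)
mu = np.ones(3) / 3              # desired task distribution

p_easy = self_paced_distribution(J, mu, alpha=0.1)    # small alpha: focus on rewarding tasks
p_match = self_paced_distribution(J, mu, alpha=100.0) # large alpha: p stays close to mu
```

Sweeping `alpha` from small to large moves the induced distribution from "train where reward is currently achievable" toward the desired distribution `mu`, which is the curriculum effect the abstract refers to.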