Imitation learning by state-only distribution matching

Cited by: 0
Authors
Damian Boborzi
Christoph-Nikolas Straehle
Jens S. Buchner
Lars Mikelsons
Affiliations
[1] Augsburg University
[2] Bosch Center for Artificial Intelligence
[3] Bosch GmbH
Source
Applied Intelligence | 2023 / Volume 53
Keywords
Imitation learning; State-only; Normalizing flows; Reinforcement learning; Learning from observations;
DOI
Not available
Abstract
Imitation learning from observation describes policy learning in a way similar to human learning: an agent’s policy is trained by observing an expert performing a task. Although many state-only imitation learning approaches are based on adversarial imitation learning, one main drawback is that adversarial training is often unstable and lacks a reliable convergence estimator. If the true environment reward is unknown and cannot be used to select the best-performing model, this can result in poor real-world policy performance. We propose a non-adversarial learning-from-observations approach, together with an interpretable convergence and performance metric. Our training objective minimizes the Kullback-Leibler divergence (KLD) between the policy and expert state transition trajectories, which can be optimized in a non-adversarial fashion. Such methods demonstrate improved robustness when learned density models guide the optimization. We further improve the sample efficiency by rewriting the KLD minimization as the Soft Actor Critic objective based on a modified reward that uses additional density models estimating the environment’s forward and backward dynamics. Finally, we evaluate the effectiveness of our approach on well-known continuous control environments and show state-of-the-art performance while providing a reliable performance estimator, compared to several recent learning-from-observation methods.
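The worked equation below is an illustrative sketch and is not taken from the paper itself: it writes the state-transition matching objective described in the abstract as a KLD between the policy-induced and expert state-transition distributions, which exposes the per-transition log-density-ratio term that can be treated as a modified reward inside an off-policy learner such as Soft Actor Critic. The symbols \(\rho_{\pi}\), \(\rho_{E}\), and \(r\) are hypothetical notation and may differ from the paper's own.

\[
\min_{\pi}\; D_{\mathrm{KL}}\big(\rho_{\pi}(s, s') \,\|\, \rho_{E}(s, s')\big)
= \min_{\pi}\; \mathbb{E}_{(s, s') \sim \rho_{\pi}}\big[\log \rho_{\pi}(s, s') - \log \rho_{E}(s, s')\big],
\qquad
r(s, s') := \log \rho_{E}(s, s') - \log \rho_{\pi}(s, s').
\]

Under this reading, \(\rho_{E}\) can be fit to expert transitions with a normalizing flow, while \(\rho_{\pi}\) is estimated from the learned forward and backward dynamics density models mentioned in the abstract, so that maximizing the expected reward \(r\) with Soft Actor Critic minimizes the KLD; the paper's exact derivation and reward form may differ from this sketch.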
Pages: 30865-30886
Page count: 21