Robust Imitation via Mirror Descent Inverse Reinforcement Learning

Cited by: 0
Authors
Han, Dong-Sig [1 ]
Kim, Hyunseo [1 ]
Lee, Hyundo [1 ]
Ryu, Je-Hwan [1 ]
Zhang, Byoung-Tak [1 ]
Affiliations
[1] Seoul Natl Univ, Artificial Intelligence Inst, Seoul, South Korea
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recently, adversarial imitation learning has emerged as a scalable reward acquisition method for inverse reinforcement learning (IRL) problems. However, the estimated reward signals often become uncertain and fail to train a reliable statistical model, since existing methods tend to solve hard optimization problems directly. Inspired by a first-order optimization method called mirror descent, this paper proposes to predict a sequence of reward functions, which are iterative solutions to a constrained convex problem. IRL solutions derived by mirror descent are tolerant to the uncertainty incurred by target density estimation, since the amount of reward learning is regulated with respect to local geometric constraints. We prove that the proposed mirror descent update rule ensures robust minimization of a Bregman divergence in terms of a rigorous regret bound of $\mathcal{O}(1/T)$ for step sizes $\{\eta_t\}_{t=1}^{T}$. Our IRL method was applied on top of an adversarial framework, and it outperformed existing adversarial methods in an extensive suite of benchmarks.
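To make the mirror descent idea concrete, the following is a minimal, generic sketch of a mirror descent loop on the probability simplex with a negative-entropy mirror map, where the Bregman divergence reduces to the KL divergence and each iterate is obtained by a multiplicative (exponentiated-gradient) update. This is only an illustration of the underlying first-order method with placeholder step sizes eta_t and a placeholder loss gradient; it is not the paper's IRL algorithm or adversarial framework.

```python
import numpy as np

def mirror_descent_simplex(grad_fn, x0, etas):
    """Generic mirror descent on the probability simplex.

    With a negative-entropy mirror map the Bregman divergence is the KL
    divergence, so each step is an exponentiated-gradient update followed
    by a normalization (the Bregman projection onto the simplex).

    grad_fn : callable returning the (sub)gradient of the loss at x
    x0      : initial point on the simplex
    etas    : sequence of step sizes {eta_t}, t = 1..T (placeholder schedule)
    """
    x = np.asarray(x0, dtype=float)
    for eta in etas:
        g = grad_fn(x)
        x = x * np.exp(-eta * g)   # gradient step in the dual (log) space
        x = x / x.sum()            # project back onto the simplex
    return x

# Usage: minimize a linear loss <c, x> over the simplex with decaying steps.
c = np.array([0.3, 1.0, 0.1])
T = 200
etas = [1.0 / np.sqrt(t) for t in range(1, T + 1)]
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, etas)
print(x_star)  # mass concentrates on the coordinate with the smallest cost
```

In this toy setting the decaying step sizes play the role of the {eta_t} schedule in the regret bound: smaller steps limit how far each iterate can move in Bregman distance, which is the sense in which the amount of learning per update is regulated by local geometry.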
Pages: 13