Receding Horizon Inverse Reinforcement Learning

Cited: 0
Authors
Xu, Yiqing [1 ]
Gao, Wei [1 ]
Hsu, David [1 ,2 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Singapore, Singapore
[2] Natl Univ Singapore, Smart Syst Inst, Singapore, Singapore
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
Funding
National Research Foundation, Singapore;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Inverse reinforcement learning (IRL) seeks to infer a cost function that explains the underlying goals and preferences of expert demonstrations. This paper presents receding-horizon inverse reinforcement learning (RHIRL), an IRL algorithm for high-dimensional, noisy, continuous systems with black-box dynamic models. RHIRL addresses two key challenges of IRL: scalability and robustness. To handle high-dimensional continuous systems, RHIRL matches the induced optimal trajectories with expert demonstrations locally in a receding horizon manner and "stitches" together the local solutions to learn the cost; it thereby avoids the "curse of dimensionality". This contrasts with earlier algorithms that match with expert demonstrations globally over the entire high-dimensional state space. To be robust against imperfect expert demonstrations and control noise, RHIRL learns a state-dependent cost function "disentangled" from system dynamics under mild conditions. Experiments on benchmark tasks show that RHIRL outperforms several leading IRL algorithms in most instances. We also prove that the cumulative error of RHIRL grows linearly with the task duration.
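The local matching idea in the abstract — plan over a short horizon under the current cost, compare against the corresponding expert segment, and "stitch" the per-window updates together — can be sketched on a toy problem. Everything below (the 1-D dynamics, the quadratic feature, the greedy planner, and all hyperparameters) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of a receding-horizon IRL loop on a toy 1-D point mass
# with dynamics s' = s + a. The expert demo walks toward the origin, and
# we learn a state cost c_theta(s) = theta * s^2 whose locally optimal
# trajectories, matched window by window, reproduce that behavior.

# Actions are ordered so that ties (theta = 0) push away from the goal,
# which gives a nonzero learning signal on the first pass.
ACTIONS = np.array([1.0, 0.0, -1.0])
H = 3          # receding-horizon length (assumed hyperparameter)
LR = 0.05      # cost-learning step size (assumed hyperparameter)

def phi(s):
    """Single state feature; the cost is linear in phi."""
    return s ** 2

def plan_window(s0, theta, horizon):
    """Greedy local planner: minimize the cost of each next state."""
    traj, s = [s0], s0
    for _ in range(horizon):
        s = s + ACTIONS[np.argmin([theta * phi(s + a) for a in ACTIONS])]
        traj.append(s)
    return np.array(traj)

# Expert demonstration: starts at 5 and steps toward 0.
expert = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0])

theta = 0.0  # initial cost weight
for _ in range(20):                       # cost-learning iterations
    for t in range(len(expert) - H):      # slide the horizon over the demo
        e_win = expert[t:t + H + 1]
        l_win = plan_window(expert[t], theta, H)
        # MaxEnt-style gradient: raise the cost of states the planner
        # visits but the expert avoids, one local window at a time.
        theta += LR * (phi(l_win).sum() - phi(e_win).sum())
        theta = max(theta, 0.0)

learned = plan_window(5.0, theta, len(expert) - 1)
```

The point of the sketch is the inner loop: each update only ever compares an H-step local rollout with the matching expert segment, so no global trajectory over the full state space is ever optimized, which is the scalability argument the abstract makes.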
Pages: 13