Off-Dynamics Inverse Reinforcement Learning

Times Cited: 0
Authors
Kang, Yachen [1 ,2 ]
Liu, Jinxin [2 ]
Wang, Donglin [2 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310024, Peoples R China
[2] Westlake Univ, Sch Engn, Machine Intelligence Lab MiLAB, Hangzhou 310024, Peoples R China
Keywords
Trajectory; Training; Task analysis; Reinforcement learning; Heuristic algorithms; Data models; Costs; Hetero-domain; imitation learning; inverse reinforcement learning; off-dynamics; transfer learning
DOI
10.1109/ACCESS.2024.3394242
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Imitation learning is a widely used paradigm for decision making that learns from expert demonstrations. Existing imitation algorithms often require repeated interaction between the agent and the environment from which the demonstrations were obtained. Acquiring expert demonstrations in a simulator usually requires specialized knowledge, while real-world interaction is limited by safety and cost concerns. Directly applying existing imitation learning algorithms in either the real world or the simulator alone is therefore not an ideal strategy. In this paper, we propose a cross-domain Inverse Reinforcement Learning training paradigm that learns a reward function from a hetero-domain expert's demonstrations while keeping interaction with the demonstration environment limited. To handle the distribution shift under this paradigm, we propose a transfer learning method called off-dynamics Inverse Reinforcement Learning. The intuition behind it is that the goal of reward learning is not only to imitate the expert but also to promote adaptation to the dynamics difference between the two hetero-domains. Specifically, we adopt a widely used Inverse Reinforcement Learning framework and modify its discriminator, which identifies agent-generated trajectories, with a quantified dynamics difference. Training this discriminator yields a transferable reward function suited to the target dynamics, which is guaranteed by our theoretical derivation. Off-dynamics Inverse Reinforcement Learning assigns higher rewards to demonstration trajectories that do not exploit discrepancies between the two domains. Extensive experiments on continuous control tasks demonstrate the effectiveness of our method and its scalability to high-dimensional tasks. Our code is available on the project website: https://github.com/yachenkang/ODIRL.
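To make the abstract's central mechanism concrete, the following Python/PyTorch snippet is a minimal sketch, not the authors' released implementation, of an AIRL-style learned reward corrected by a quantified dynamics gap, estimated here with two domain classifiers in the DARC style of off-dynamics RL. The names RewardNet, dynamics_gap, and modified_reward, the classifier construction, and the exact form of the correction are illustrative assumptions; the paper's derivation may differ.

import torch
import torch.nn as nn


class RewardNet(nn.Module):
    # Learned reward r_theta(s, a) used inside the AIRL-style discriminator.
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def dynamics_gap(clf_sas, clf_sa, obs, act, next_obs):
    # Estimate log p_target(s'|s,a) - log p_source(s'|s,a) with two domain
    # classifiers (a DARC-style estimator, an assumption here): the logit of the
    # (s, a, s') classifier minus the logit of the (s, a) classifier.
    logit_sas = clf_sas(torch.cat([obs, act, next_obs], dim=-1)).squeeze(-1)
    logit_sa = clf_sa(torch.cat([obs, act], dim=-1)).squeeze(-1)
    return logit_sas - logit_sa


def modified_reward(reward_net, clf_sas, clf_sa, obs, act, next_obs):
    # Reward corrected by the dynamics gap: transitions that are unlikely under
    # the target dynamics receive lower reward, so trajectories that exploit the
    # discrepancy between the two domains are down-weighted.
    return reward_net(obs, act) + dynamics_gap(clf_sas, clf_sa, obs, act, next_obs)


if __name__ == "__main__":
    obs_dim, act_dim = 11, 3  # e.g. a small continuous-control task
    reward_net = RewardNet(obs_dim, act_dim)
    clf_sas = nn.Sequential(nn.Linear(2 * obs_dim + act_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    clf_sa = nn.Sequential(nn.Linear(obs_dim + act_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    s, a, s2 = torch.randn(5, obs_dim), torch.randn(5, act_dim), torch.randn(5, obs_dim)
    print(modified_reward(reward_net, clf_sas, clf_sa, s, a, s2).shape)  # torch.Size([5])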
Pages: 65117-65127
Number of pages: 11