Model-based trajectory stitching for improved behavioural cloning and its applications

Cited by: 3
Authors
Hepburn, Charles A. [1 ]
Montana, Giovanni [2 ,3 ,4 ]
Affiliations
[1] Univ Warwick, Math Inst, Coventry, England
[2] Univ Warwick, Dept Stat, Coventry, England
[3] Univ Warwick, WMG, Coventry, England
[4] Alan Turing Inst, London, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Offline reinforcement learning; Behavioural cloning; Imitation learning;
DOI
10.1007/s10994-023-06392-z
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Behavioural cloning (BC) is a commonly used imitation learning method for inferring a sequential decision-making policy from expert demonstrations. However, when the quality of the data is not optimal, the resulting behavioural policy also performs sub-optimally once deployed. Recently, there has been a surge in offline reinforcement learning methods that promise to extract high-quality policies from sub-optimal historical data. A common approach is to perform regularisation during training, encouraging updates during policy evaluation and/or policy improvement to stay close to the underlying data. In this work, we investigate whether an offline approach to improving the quality of the existing data can lead to improved behavioural policies without any changes to the BC algorithm. The proposed data-improvement approach - Model-Based Trajectory Stitching (MBTS) - generates new trajectories (sequences of states and actions) by 'stitching' together pairs of states that were disconnected in the original data and synthesising the action that connects them. By construction, these new transitions are guaranteed to be highly plausible according to probabilistic models of the environment, and to improve a state-value function. We demonstrate that the iterative process of replacing old trajectories with new ones incrementally improves the underlying behavioural policy. Extensive experimental results show that significant performance gains can be achieved using MBTS over BC policies extracted from the original data. Furthermore, using the D4RL benchmarking suite, we demonstrate that state-of-the-art results are obtained by combining MBTS with two existing offline learning methodologies that rely on BC: model-based offline planning (MBOP) and policy constraint (TD3+BC).
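As a rough illustration of the stitching idea described in the abstract, the sketch below performs a single stitching pass over a dataset of transitions: for each transition it considers replacing the next state with a candidate state from elsewhere in the data, accepts a candidate only if a learned dynamics model deems the new transition plausible, and keeps it only if it improves the state-value estimate. All names here (stitch_once, value_fn, inverse_dynamics, log_likelihood, ll_threshold) are hypothetical stand-ins for the paper's learned components, not the authors' implementation; in MBTS this pass would be repeated iteratively.

# Illustrative sketch (not the authors' code) of one MBTS-style stitching pass.
# Assumed stand-ins: value_fn approximates a state-value function V(s),
# inverse_dynamics(s, s') proposes the action connecting two states, and
# log_likelihood scores transition plausibility under a learned dynamics model.
def stitch_once(trajectories, candidate_states, value_fn,
                inverse_dynamics, log_likelihood, ll_threshold=-5.0):
    """Return trajectories where each transition may be re-routed to a
    higher-value, dynamics-plausible next state drawn from the dataset."""
    stitched = []
    for traj in trajectories:                    # traj: list of (s, a, s_next)
        new_traj = []
        for s, a, s_next in traj:
            best_a, best_next = a, s_next
            best_value = value_fn(s_next)
            for s_cand in candidate_states:      # states taken from other trajectories
                a_cand = inverse_dynamics(s, s_cand)
                # Reject candidate transitions the dynamics model finds implausible.
                if log_likelihood(s, a_cand, s_cand) < ll_threshold:
                    continue
                # Keep only candidates that improve the state-value estimate.
                if value_fn(s_cand) > best_value:
                    best_a, best_next, best_value = a_cand, s_cand, value_fn(s_cand)
            new_traj.append((s, best_a, best_next))
        stitched.append(new_traj)
    return stitched

# Toy usage with 1-D states and stand-in models, purely for illustration.
toy_trajs = [[(0.0, 0.1, 1.0), (1.0, 0.2, 2.0)]]
print(stitch_once(
    toy_trajs,
    candidate_states=[1.5, 2.5],
    value_fn=lambda s: s,                        # pretend larger states are better
    inverse_dynamics=lambda s, s2: s2 - s,       # action = displacement
    log_likelihood=lambda s, a, s2: 0.0,         # treat everything as plausible
))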
Pages: 647-674
Number of pages: 28