Bootstrapped Transformer for Offline Reinforcement Learning

Cited by: 0
Authors
Wang, Kerong [1 ,3 ]
Zhao, Hanye [1 ,3 ]
Luo, Xufang [2 ]
Ren, Kan [2 ]
Zhang, Weinan [1 ]
Li, Dongsheng [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Microsoft Res Asia, Beijing, Peoples R China
[3] Microsoft Res, Beijing, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Offline reinforcement learning (RL) aims at learning policies from previously collected static trajectory data without interacting with the real environment. Recent works provide a novel perspective by viewing offline RL as a generic sequence generation problem, adopting sequence models such as the Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. However, the training datasets used in general offline RL tasks are quite limited and often suffer from insufficient distribution coverage, which can be harmful to training sequence generation models. In this paper, we propose a novel algorithm named Bootstrapped Transformer, which incorporates the idea of bootstrapping and leverages the learned model to self-generate more offline data to further boost sequence model training. We conduct extensive experiments on two offline RL benchmarks and demonstrate that our model can largely remedy the existing offline RL training limitations and outperform other strong baseline methods. We also analyze the generated pseudo data, and the revealed characteristics may shed some light on offline RL training. The code and supplementary materials are available at https://seqml.github.io/bootorl.
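The bootstrapping loop the abstract describes — fit a sequence model on the offline trajectories, let the learned model self-generate pseudo trajectories, and retrain on the augmented dataset — can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: a bigram count model over discrete tokens stands in for the Transformer, and the names `train_counts`, `generate`, and `bootstrap` are assumptions, not identifiers from the authors' code.

```python
import random

def train_counts(trajectories, n_tokens):
    # Fit a simple autoregressive bigram model (a stand-in for the
    # Transformer sequence model); Laplace-smoothed transition counts.
    counts = [[1] * n_tokens for _ in range(n_tokens)]
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length, rng):
    # Self-generate a pseudo trajectory by sampling from the learned model.
    traj = [start]
    for _ in range(length - 1):
        row = counts[traj[-1]]
        r = rng.random() * sum(row)
        acc = 0
        for tok, c in enumerate(row):
            acc += c
            if r < acc:
                traj.append(tok)
                break
    return traj

def bootstrap(trajectories, n_tokens, n_pseudo, seed=0):
    # One bootstrapping round: train on the offline data, self-generate
    # pseudo trajectories, then retrain on the augmented dataset.
    rng = random.Random(seed)
    counts = train_counts(trajectories, n_tokens)
    pseudo = [generate(counts, rng.randrange(n_tokens), 5, rng)
              for _ in range(n_pseudo)]
    return train_counts(trajectories + pseudo, n_tokens)
```

In the actual method the generator is the learned Transformer itself and generation quality is what makes the augmentation useful; the sketch only shows the data-flow of the self-generation loop.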
Pages: 14