Decision Transformer: Reinforcement Learning via Sequence Modeling

Cited by: 0
Authors
Chen, Lili [1 ]
Lu, Kevin [1 ]
Rajeswaran, Aravind [2 ]
Lee, Kimin [1 ]
Grover, Aditya [2 ,3 ]
Laskin, Michael [1 ]
Abbeel, Pieter [1 ]
Srinivas, Aravind [4 ]
Mordatch, Igor [5 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Facebook AI Res, London, England
[3] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[4] OpenAI, San Francisco, CA USA
[5] Google Brain, New York, NY USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021, Vol. 34
Funding
U.S. National Science Foundation (NSF);
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
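
The abstract compresses the mechanism into a few sentences; the sketch below makes the token layout concrete. It is a minimal illustration assuming a GPT-style causal Transformer built from PyTorch's nn.TransformerEncoder; the class and variable names (DecisionTransformerSketch, embed_rtg, and so on) are illustrative and are not taken from the authors' released code.

import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    """Causally masked Transformer over interleaved (return-to-go, state, action) tokens."""
    def __init__(self, state_dim, act_dim, hidden=128, context_len=20, n_layers=3, n_heads=4):
        super().__init__()
        # One embedding per modality, plus a learned timestep embedding.
        self.embed_rtg = nn.Linear(1, hidden)
        self.embed_state = nn.Linear(state_dim, hidden)
        self.embed_action = nn.Linear(act_dim, hidden)
        self.embed_timestep = nn.Embedding(context_len, hidden)
        layer = nn.TransformerEncoderLayer(hidden, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(hidden, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B,T,1)  states: (B,T,state_dim)  actions: (B,T,act_dim)  timesteps: (B,T) long
        B, T = states.shape[:2]
        t_emb = self.embed_timestep(timesteps)
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack(
            [self.embed_rtg(rtg) + t_emb,
             self.embed_state(states) + t_emb,
             self.embed_action(actions) + t_emb], dim=2
        ).reshape(B, 3 * T, -1)
        # Causal mask: each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.encoder(tokens, mask=mask)
        # Read each action prediction from its state token (positions 1, 4, 7, ...).
        return self.predict_action(h[:, 1::3])

model = DecisionTransformerSketch(state_dim=17, act_dim=6)
rtg = torch.randn(2, 20, 1)                 # desired return-to-go at each step
states = torch.randn(2, 20, 17)
actions = torch.randn(2, 20, 6)
timesteps = torch.arange(20).expand(2, 20)
print(model(rtg, states, actions, timesteps).shape)  # torch.Size([2, 20, 6])

At evaluation time, one would feed the target return as the first return-to-go token and decrement it by each observed reward; this is how return conditioning steers the generated actions toward the desired outcome.
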
Pages: 14
Related Papers
50 records in total
[11]   Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling [J].
Nguyen, Tung ;
Grover, Aditya .
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022.
[12]   ATTEXPLAINER: Explain Transformer via Attention by Reinforcement Learning [J].
Niu, Runliang ;
Wei, Zhepei ;
Wang, Yan ;
Wang, Qi .
PROCEEDINGS OF THE THIRTY-FIRST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2022, 2022, :724-731
[13]   PCDT: Pessimistic Critic Decision Transformer for Offline Reinforcement Learning [J].
Wang, Xuesong ;
Zhang, Hengrui ;
Zhang, Jiazhi ;
Chen, C. L. Philip ;
Cheng, Yuhu .
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2025.
[14]   Critic-Guided Decision Transformer for Offline Reinforcement Learning [J].
Wang, Yuanfu ;
Yang, Chao ;
Wen, Ying ;
Liu, Yu ;
Qiao, Yu .
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, :15706-15714
[15]   SEQUENCE-TO-SEQUENCE ASR OPTIMIZATION VIA REINFORCEMENT LEARNING [J].
Tjandra, Andros ;
Sakti, Sakriani ;
Nakamura, Satoshi .
2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, :5829-5833
[16]   Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets [J].
Badrinath, Anirudhan ;
Flet-Berliac, Yannis ;
Nie, Allen ;
Brunskill, Emma .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
[17]   Learn to explain transformer via interpretation path by reinforcement learning [J].
Niu, Runliang ;
Wang, Qi ;
Kong, He ;
Xing, Qianli ;
Chang, Yi ;
Yu, Philip S. .
NEURAL NETWORKS, 2025, 188.
[18]   Addressing Optimism Bias in Sequence Modeling for Reinforcement Learning [J].
Villaflor, Adam ;
Huang, Zhe ;
Pande, Swapnil ;
Dolan, John ;
Schneider, Jeff .
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022.
[19]   Robust Reinforcement Learning via Progressive Task Sequence [J].
Li, Yike ;
Tian, Yunzhe ;
Tong, Endong ;
Niu, Wenjia ;
Liu, Jiqiang .
PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, :455-463
[20]   Sequence Adaptation via Reinforcement Learning in Recommender Systems [J].
Antaris, Stefanos ;
Rafailidis, Dimitrios .
15TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS 2021), 2021, :714-718