Decision Transformer: Reinforcement Learning via Sequence Modeling

Cited by: 0
Authors
Chen, Lili [1]
Lu, Kevin [1]
Rajeswaran, Aravind [2]
Lee, Kimin [1]
Grover, Aditya [2,3]
Laskin, Michael [1]
Abbeel, Pieter [1]
Srinivas, Aravind [4]
Mordatch, Igor [5]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Facebook AI Res, London, England
[3] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[4] OpenAI, San Francisco, CA USA
[5] Google Brain, New York, NY USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021, Vol. 34
Funding
U.S. National Science Foundation
Keywords
(none listed)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
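The core mechanism the abstract describes is an interleaved (return-to-go, state, action) token stream fed through a causally masked Transformer trained to predict actions. Below is a minimal PyTorch sketch of that token layout; the class name, embedding sizes, single-head encoder, and max_timestep horizon are illustrative assumptions, not the authors' architecture or hyperparameters.

```python
# Minimal sketch (not the authors' implementation): interleave
# (return-to-go, state, action) tokens and apply a causally masked
# Transformer that predicts the action from each state token.
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=3,
                 n_heads=1, max_timestep=1000):  # illustrative sizes
        super().__init__()
        # One linear embedding per modality, plus a learned timestep embedding.
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.embed_time = nn.Embedding(max_timestep, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1)  states: (B, T, state_dim)
        # actions: (B, T, act_dim)  timesteps: (B, T), long
        B, T = states.shape[:2]
        t = self.embed_time(timesteps)
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack([self.embed_rtg(rtg) + t,
                              self.embed_state(states) + t,
                              self.embed_action(actions) + t],
                             dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token attends only to itself and earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")),
                          diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Read action predictions off the state tokens (positions 1, 4, 7, ...).
        return self.predict_action(h[:, 1::3])

# Toy usage: 2 trajectories, 10 steps, 17-dim states, 6-dim actions.
model = DecisionTransformerSketch(state_dim=17, act_dim=6)
out = model(torch.randn(2, 10, 1), torch.randn(2, 10, 17),
            torch.randn(2, 10, 6), torch.arange(10).repeat(2, 1))
print(out.shape)  # torch.Size([2, 10, 6])
```

At evaluation time, one would condition on a desired target return, feed the trajectory so far, execute the last predicted action, and decrement the return-to-go by the observed reward; this is how the return conditioning described in the abstract steers generated behavior.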
Pages: 14