Decision Transformer: Reinforcement Learning via Sequence Modeling

Cited by: 0
Authors
Chen, Lili [1 ]
Lu, Kevin [1 ]
Rajeswaran, Aravind [2 ]
Lee, Kimin [1 ]
Grover, Aditya [2 ,3 ]
Laskin, Michael [1 ]
Abbeel, Pieter [1 ]
Srinivas, Aravind [4 ]
Mordatch, Igor [5 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Facebook AI Res, London, England
[3] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[4] OpenAI, San Francisco, CA USA
[5] Google Brain, New York, NY USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021 / Vol. 34
Funding
US National Science Foundation
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
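To make the return-conditioned sequence-modeling idea in the abstract concrete, below is a minimal PyTorch sketch: trajectories are laid out as interleaved (return-to-go, state, action) tokens, fed through a causally masked Transformer, and the action at each step is predicted from the hidden state of the corresponding state token. The class name, layer sizes, and the use of nn.TransformerEncoder here are illustrative assumptions, not the authors' implementation (the paper builds on a GPT-style backbone).

```python
# Illustrative sketch of return-conditioned sequence modeling in the spirit of
# Decision Transformer. Module names and dimensions are assumptions for
# demonstration, not the official implementation.
import torch
import torch.nn as nn


class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, embed_dim=128, n_layers=3, n_heads=4, max_len=20):
        super().__init__()
        # Separate linear embeddings for returns-to-go, states, and actions,
        # plus a learned embedding for the episode timestep.
        self.embed_return = nn.Linear(1, embed_dim)
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.embed_timestep = nn.Embedding(max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(
            embed_dim, n_heads, dim_feedforward=4 * embed_dim, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(embed_dim, act_dim)

    def forward(self, returns_to_go, states, actions, timesteps):
        # returns_to_go: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) integer indices.
        B, T = states.shape[:2]
        t_emb = self.embed_timestep(timesteps)
        r = self.embed_return(returns_to_go) + t_emb
        s = self.embed_state(states) + t_emb
        a = self.embed_action(actions) + t_emb
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack([r, s, a], dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token may only attend to earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Predict the action from the hidden state at each state token
        # (positions 1, 4, 7, ...), i.e. before the action has been seen.
        return self.predict_action(h[:, 1::3])


if __name__ == "__main__":
    model = DecisionTransformerSketch(state_dim=17, act_dim=6)
    B, T = 4, 20
    rtg = torch.randn(B, T, 1)          # desired returns-to-go
    states = torch.randn(B, T, 17)
    actions = torch.randn(B, T, 6)
    timesteps = torch.arange(T).expand(B, T)
    pred_actions = model(rtg, states, actions, timesteps)  # (B, T, 6)
    print(pred_actions.shape)
```

Training would regress the predicted actions against the actions in the offline dataset; at rollout time, per the paper, the model is conditioned on a desired target return that is decremented by the rewards actually received as the episode unfolds.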
Pages: 14
Related Papers
50 results in total
  • [1] Optimizing Attention for Sequence Modeling via Reinforcement Learning
    Fei, Hao
    Zhang, Yue
    Ren, Yafeng
    Ji, Donghong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (08) : 3612 - 3621
  • [2] Deep reinforcement learning navigation via decision transformer in autonomous driving
    Ge, Lun
    Zhou, Xiaoguang
    Li, Yongqiang
    Wang, Yongcong
    FRONTIERS IN NEUROROBOTICS, 2024, 18
  • [3] Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning
    Wang, Siyu
    Chen, Xiaocong
    Jannach, Dietmar
    Yao, Lina
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 1599 - 1608
  • [4] Guided Reinforcement Learning via Sequence Learning
    Ramamurthy, Rajkumar
    Sifa, Rafet
    Luebbering, Max
    Bauckhage, Christian
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 335 - 345
  • [5] Feedback Decision Transformer: Offline Reinforcement Learning With Feedback
    Giladi, Liad
    Katz, Gilad
    23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023, 2023, : 1037 - 1042
  • [6] Transformer in reinforcement learning for decision-making: a survey
    Yuan, Weilin
    Chen, Jiaxing
    Chen, Shaofei
    Feng, Dawei
    Hu, Zhenzhen
    Li, Peng
    Zhao, Weiwei
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2024, 25 (06) : 763 - 790
  • [7] Machining sequence learning via inverse reinforcement learning
    Sugisawa, Yasutomo
    Takasugi, Keigo
    Asakawa, Naoki
    PRECISION ENGINEERING-JOURNAL OF THE INTERNATIONAL SOCIETIES FOR PRECISION ENGINEERING AND NANOTECHNOLOGY, 2022, 73 : 477 - 487
  • [8] ATTEXPLAINER: Explain Transformer via Attention by Reinforcement Learning
    Niu, Runliang
    Wei, Zhepei
    Wang, Yan
    Wang, Qi
    PROCEEDINGS OF THE THIRTY-FIRST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2022, 2022, : 724 - 731
  • [9] Critic-Guided Decision Transformer for Offline Reinforcement Learning
    Wang, Yuanfu
    Yang, Chao
    Wen, Ying
    Liu, Yu
    Qiao, Yu
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 15706 - 15714
  • [10] Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling
    Nguyen, Tung
    Grover, Aditya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,