Decision Transformer: Reinforcement Learning via Sequence Modeling

Cited by: 0
Authors
Chen, Lili [1 ]
Lu, Kevin [1 ]
Rajeswaran, Aravind [2 ]
Lee, Kimin [1 ]
Grover, Aditya [2 ,3 ]
Laskin, Michael [1 ]
Abbeel, Pieter [1 ]
Srinivas, Aravind [4 ]
Mordatch, Igor [5 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Facebook AI Res, London, England
[3] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[4] OpenAI, San Francisco, CA USA
[5] Google Brain, New York, NY USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021 / Vol. 34
Funding
US National Science Foundation;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
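The conditioning scheme the abstract describes can be made concrete with a small sketch: each trajectory is re-expressed as an interleaved stream of (return-to-go, state, action) tokens, where the return-to-go at step t is the sum of rewards from t onward, and the model is trained to predict the next action given that stream. This is a minimal, illustrative reconstruction assuming scalar undiscounted rewards; the function names `returns_to_go` and `build_sequence` are ours, not identifiers from the paper's released code.

```python
def returns_to_go(rewards):
    """Suffix sums of the reward sequence: R_t = sum of r_t' for t' >= t
    (undiscounted, matching the return-conditioning in the abstract)."""
    out, total = [], 0.0
    for r in reversed(rewards):
        total += r
        out.append(total)
    return out[::-1]

def build_sequence(returns, states, actions):
    """Interleave (return-to-go, state, action) triples into one token
    stream; a causally masked Transformer would consume this stream and
    predict each action token from the tokens preceding it."""
    seq = []
    for rtg, s, a in zip(returns, states, actions):
        seq.extend([("R", rtg), ("s", s), ("a", a)])
    return seq

rewards = [1.0, 0.0, 2.0]
rtg = returns_to_go(rewards)          # [3.0, 2.0, 2.0]
seq = build_sequence(rtg, ["s0", "s1", "s2"], [0, 1, 0])
```

At evaluation time the same construction is seeded with a *desired* total return rather than an observed one, and the conditioning return is decremented by each reward received as actions are rolled out.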
Pages: 14
Related papers
50 results total
  • [11] Critic-Guided Decision Transformer for Offline Reinforcement Learning
    Wang, Yuanfu
    Yang, Chao
    Wen, Ying
    Liu, Yu
    Qiao, Yu
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 15706 - 15714
  • [12] Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling
    Nguyen, Tung
    Grover, Aditya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [13] SEQUENCE-TO-SEQUENCE ASR OPTIMIZATION VIA REINFORCEMENT LEARNING
    Tjandra, Andros
    Sakti, Sakriani
    Nakamura, Satoshi
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5829 - 5833
  • [14] Waypoint Transformer: Reinforcement Learning via Supervised Learning with Intermediate Targets
    Badrinath, Anirudhan
    Flet-Berliac, Yannis
    Nie, Allen
    Brunskill, Emma
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [15] Addressing Optimism Bias in Sequence Modeling for Reinforcement Learning
    Villaflor, Adam
    Huang, Zhe
    Pande, Swapnil
    Dolan, John
    Schneider, Jeff
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [16] Robust Reinforcement Learning via Progressive Task Sequence
    Li, Yike
    Tian, Yunzhe
    Tong, Endong
    Niu, Wenjia
    Liu, Jiqiang
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 455 - 463
  • [17] Sequence Adaptation via Reinforcement Learning in Recommender Systems
    Antaris, Stefanos
    Rafailidis, Dimitrios
    15TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS 2021), 2021, : 714 - 718
  • [18] Autonomous Predictive Modeling via Reinforcement Learning
    Khurana, Udayan
    Samulowitz, Horst
    CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 3285 - 3288
  • [19] Sequence labeling via reinforcement learning with aggregate labels
    Geromel, Marcel
    Cimiano, Philipp
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [20] Building Decision Forest via Deep Reinforcement Learning
    Hua, Hongzhi
    Wen, Guixuan
    Wu, Kaigui
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,