Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models

Times Cited: 0
Authors
Zhao, Siyan [1 ]
Grover, Aditya [1 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
(none)
DOI
(none)
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. However, the combined integration of these capabilities poses competing algorithmic challenges in retaining maximal expressivity while allowing for flexibility in modeling choices for efficient learning and inference. We present Decision Stacks, a generative framework that decomposes goal-conditioned policy agents into three generative modules. These modules simulate the temporal evolution of observations, rewards, and actions via independent generative models that can be learned in parallel via teacher forcing. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization in several MDP and POMDP environments, outperforming existing methods and enabling flexible generative decision making.
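The decomposition described in the abstract — three independently learned generative modules for observations, rewards, and actions, chained autoregressively at inference time — can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual implementation: the class names, module interfaces, and toy dynamics (a one-dimensional state drifting toward the goal) are all assumptions made for exposition.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Trajectory:
    observations: List[float] = field(default_factory=list)
    rewards: List[float] = field(default_factory=list)
    actions: List[float] = field(default_factory=list)


class ObservationModel:
    """Stands in for p(o_{t+1} | o_{<=t}, goal): toy dynamics drifting toward the goal."""
    def sample(self, observations: List[float], goal: float) -> float:
        last = observations[-1]
        return last + 0.1 * (goal - last)


class RewardModel:
    """Stands in for p(r_t | o_{<=t}, goal): reward as negative distance to the goal."""
    def sample(self, observations: List[float], goal: float) -> float:
        return -abs(goal - observations[-1])


class ActionModel:
    """Stands in for p(a_t | o_{<=t}, r_{<=t}): action as the latest state change."""
    def sample(self, observations: List[float], rewards: List[float]) -> float:
        return observations[-1] - observations[-2]


class DecisionStack:
    """Chains the three modules autoregressively: observation, then reward, then action.

    Each module could be swapped for a different generative model (autoregressive,
    diffusion, etc.) without changing the others, which is the flexibility the
    abstract emphasizes.
    """
    def __init__(self, obs_model, rew_model, act_model):
        self.obs, self.rew, self.act = obs_model, rew_model, act_model

    def rollout(self, o0: float, goal: float, horizon: int) -> Trajectory:
        traj = Trajectory(observations=[o0])
        for _ in range(horizon):
            traj.observations.append(self.obs.sample(traj.observations, goal))
            traj.rewards.append(self.rew.sample(traj.observations, goal))
            traj.actions.append(self.act.sample(traj.observations, traj.rewards))
        return traj
```

Because each module conditions only on quantities produced earlier in the chain, the three models can be trained in parallel with teacher forcing on logged trajectories, as the abstract notes.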
Pages: 18