Guided Soft Actor Critic: A Guided Deep Reinforcement Learning Approach for Partially Observable Markov Decision Processes

Cited by: 9
Authors
Haklidir, Mehmet [1 ,2 ]
Temeltas, Hakan [1 ]
Affiliations
[1] Istanbul Tech Univ, Dept Control & Automat Engn, TR-34467 Istanbul, Turkey
[2] TUBITAK Informat & Informat Secur Res Ctr, Informat Technol Inst, TR-41470 Kocaeli, Turkey
Funding
National Natural Science Foundation of China;
Keywords
Reinforcement learning; Markov processes; Task analysis; Training; Taxonomy; Supervised learning; Licenses; Deep reinforcement learning; guided policy search; POMDP;
DOI
10.1109/ACCESS.2021.3131772
Chinese Library Classification
TP [automation technology; computer technology];
Subject Classification Code
0812;
Abstract
Most real-world problems are essentially partially observable, and the environment model is unknown. There is therefore a significant need for reinforcement learning approaches that can solve them, in which the agent perceives the state of the environment partially and noisily. Guided reinforcement learning methods address this by providing additional state knowledge to the reinforcement learning algorithm during training, allowing it to solve a partially observable Markov decision process (POMDP) more effectively. However, such guided approaches are relatively rare in the literature, and most existing ones are model-based, meaning they must first learn an appropriate model of the environment. In this paper, we propose a novel model-free approach that combines the soft actor-critic method with the concept of supervised learning to solve real-world problems formulated as POMDPs. In experiments performed on OpenAI Gym, an open-source simulation platform, our guided soft actor-critic approach outperformed other baseline algorithms, attaining a 7-20% higher maximum average return on five partially observable tasks constructed from continuous control problems and simulated in MuJoCo.
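The core guidance idea described in the abstract (a learner policy that sees only a partial observation is pulled, via a supervised loss term, toward a guide that has access to the full state during training) can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm, which builds on soft actor-critic; the linear policies, the weighting `lam`, and all variable names here are illustrative assumptions.

```python
# Minimal sketch of "guided" training under partial observability (illustrative,
# not the authors' method): the guide acts on the full state, the learner acts
# on a partial observation, and the learner is updated with a supervised
# imitation term toward the guide's action.
import numpy as np

rng = np.random.default_rng(0)

state_dim, obs_dim, act_dim = 4, 2, 1
W_guide = rng.normal(size=(act_dim, state_dim))   # guide: full state -> action
W_learner = np.zeros((act_dim, obs_dim))          # learner: partial obs -> action

lam, lr = 1.0, 0.1                                # guidance weight, step size
for _ in range(200):
    S = rng.normal(size=(64, state_dim))          # batch of full states
    O = S[:, :obs_dim]                            # partial observation: first dims only
    A_guide = S @ W_guide.T                       # guide sees everything
    A_learner = O @ W_learner.T                   # learner sees the observation
    # gradient of the supervised guidance term: mean squared error to the guide
    grad = lam * (A_learner - A_guide).T @ O / len(S)
    W_learner -= lr * grad

# The learner converges toward the part of the guide it can actually observe.
err = float(np.linalg.norm(W_learner - W_guide[:, :obs_dim]))
```

In the paper's full method this supervised term is added to the soft actor-critic objective rather than used alone, so the learner still optimizes return while being guided by full-state knowledge that is available only at training time.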
Pages: 159672-159683
Page count: 12