Safe and Sample-Efficient Reinforcement Learning for Clustered Dynamic Environments

Cited by: 8
Authors
Chen, Hongyi [1 ]
Liu, Changliu [2 ]
Affiliations
[1] Georgia Inst Technol, Inst Robot & Intelligent Machines, Atlanta, GA 30332 USA
[2] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
Source
IEEE CONTROL SYSTEMS LETTERS | 2022, Vol. 6
Keywords
Safety; Robots; Heuristic algorithms; Training; Reinforcement learning; Indexes; Task analysis; safe control; robotics
DOI
10.1109/LCSYS.2021.3136486
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Number
0812
Abstract
This letter proposes a safe and sample-efficient reinforcement learning (RL) framework to address two major challenges in developing applicable RL algorithms: satisfying safety constraints and learning efficiently from limited samples. To guarantee safety in complex real-world environments, we use the safe set algorithm (SSA) to monitor and modify the nominal controls, and evaluate SSA+RL in a clustered dynamic environment that is challenging for existing RL algorithms to solve. However, the SSA+RL framework is usually not sample-efficient, especially in reward-sparse environments, an issue that has not been addressed in previous safe RL work. To improve learning efficiency, we propose three techniques: (1) avoiding overly conservative behavior by adapting the SSA; (2) encouraging safe exploration using random network distillation with safety constraints; (3) improving policy convergence by treating SSA corrections as expert demonstrations and learning from them directly. The experimental results show that our framework achieves better safety performance than other safe RL methods during training and solves the task with substantially fewer episodes.
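The monitor-and-modify step described in the abstract can be illustrated with a minimal sketch of a safe-set-style control projection. This is not the paper's exact formulation: the safety index `phi`, the control-affine dynamics assumption, and the margin `eta` are all illustrative assumptions.

```python
import numpy as np

def ssa_monitor(x, u_nominal, phi, grad_phi, f, g, eta=0.1):
    """Sketch of a safe-set-style monitor (illustrative, not the paper's exact SSA).

    Assumes control-affine dynamics x_dot = f(x) + g(x) u and a safety index
    phi(x) whose safe set is {x : phi(x) < 0}. When phi(x) >= 0, the nominal
    control is minimally corrected so that phi_dot = Lf + Lg @ u <= -eta,
    pushing the state back toward the safe set.
    """
    if phi(x) < 0:
        return u_nominal              # inside the safe set: pass through unchanged
    Lf = grad_phi(x) @ f(x)           # drift contribution to phi_dot
    Lg = grad_phi(x) @ g(x)           # control contribution to phi_dot
    violation = Lf + Lg @ u_nominal + eta
    if violation <= 0:
        return u_nominal              # nominal control already decreases phi fast enough
    # minimal-norm correction along Lg enforcing Lf + Lg @ u <= -eta
    return u_nominal - violation * Lg / (Lg @ Lg + 1e-9)
```

The correction is the closed-form solution of the minimal-deviation projection onto the single linear constraint on phi_dot; a quadratic-program solver would be used when there are input bounds or multiple constraints.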
Pages: 1928-1933
Page count: 6
References
24 total
[1] Abbeel, Pieter; Coates, Adam; Ng, Andrew Y. Autonomous Helicopter Aerobatics through Apprenticeship Learning [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2010, 29(13): 1608-1639.
[2] Achiam J, 2017, PR MACH LEARN RES, V70.
[3] Achiam Joshua, 2019, OPENAI, P1.
[4] Buckman J, 2018, ADV NEUR IN, V31.
[5] Burda Yuri, 2019, ICLR.
[6] Cheng R, 2019, AAAI CONF ARTIF INTE, P3387.
[7] Dulac-Arnold, Gabriel; Levine, Nir; Mankowitz, Daniel J.; Li, Jerry; Paduraru, Cosmin; Gowal, Sven; Hester, Todd. Challenges of real-world reinforcement learning: definitions, benchmarks and analysis [J]. MACHINE LEARNING, 2021, 110(09): 2419-2468.
[8] Fujimoto S, 2018, PR MACH LEARN RES, V80.
[9] García J, 2015, J MACH LEARN RES, V16, P1437.
[10] Gehring C., 2013, P 2013 INT C AUTONOM, P1037.