Verifiably Safe Off-Model Reinforcement Learning

Cited by: 36
Authors
Fulton, Nathan [1]
Platzer, Andre [1]
Affiliation
[1] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Source
TOOLS AND ALGORITHMS FOR THE CONSTRUCTION AND ANALYSIS OF SYSTEMS, PT I | 2019 / Vol. 11427
DOI
10.1007/978-3-030-17462-0_28
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The desire to use reinforcement learning in safety-critical settings has inspired recent interest in formal methods for learning algorithms. Existing formal methods for learning and optimization primarily address constrained learning or constrained optimization: given a single correct model and an associated safety constraint, these approaches guarantee efficient learning while provably avoiding behaviors outside the safety constraint. Acting well given an accurate environmental model is an important prerequisite for safe learning, but is ultimately insufficient for systems that operate in complex heterogeneous environments. This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple possible environmental models must be taken into account. Through a combination of inductive learning from data and deductive proving, with design-time model updates and runtime model falsification, we provide a first approach toward obtaining formal safety proofs for autonomous systems acting in heterogeneous environments.
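The abstract's core mechanism can be illustrated with a small sketch: keep several candidate environment models, falsify at runtime any model whose predictions disagree with observed transitions, and restrict the agent to actions that remain safe under every surviving model. The toy 1-D linear dynamics, model parameters, and the threshold-style safety check below are illustrative assumptions standing in for the paper's formally verified models and proofs, not the authors' implementation.

```python
# Sketch of runtime model falsification over multiple candidate models,
# as described in the abstract. All dynamics and parameters are
# hypothetical; a real system would use formally verified models.

def predict(model, state, action):
    """Toy linear dynamics: next_state = a*state + b*action."""
    a, b = model
    return a * state + b * action

def falsify(models, state, action, observed_next, tol=1e-3):
    """Discard any candidate model inconsistent with the observed transition."""
    return [m for m in models
            if abs(predict(m, state, action) - observed_next) <= tol]

def safe_actions(models, state, actions, limit=10.0):
    """Actions whose predicted next state stays within |limit| under
    ALL surviving models (a toy stand-in for a formal safety proof)."""
    return [u for u in actions
            if all(abs(predict(m, state, u)) <= limit for m in models)]

# Three candidate models; assume the true environment is (a=1.0, b=2.0).
candidates = [(1.0, 2.0), (1.0, 5.0), (0.5, 2.0)]
state, action = 1.0, 1.0
observed = 1.0 * state + 2.0 * action  # transition generated by the true model

surviving = falsify(candidates, state, action, observed)
print(surviving)                                   # [(1.0, 2.0)]
print(safe_actions(surviving, state, [-1.0, 0.0, 1.0, 100.0]))
# [-1.0, 0.0, 1.0] -- the action 100.0 would violate the safety bound
```

The key design point the paper's approach formalizes is that safety must hold under the intersection of all unfalsified models, so the agent never relies on a single possibly-wrong model.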
Pages: 413-430
Page count: 18
References
32 records in total (first 10 shown)
[1] Achiam J., 2017, Proceedings of Machine Learning Research, Vol. 70.
[2] Akazaki T., Liu S., Yamagata Y., Duan Y., Hao J. Falsification of Cyber-Physical Systems Using Deep Reinforcement Learning. Formal Methods, 2018, 10951:456-465.
[3] Alshiekh M., 2018, AAAI Conference on Artificial Intelligence, p. 2669.
[4] Alur R., 1993, Hybrid Systems, p. 209.
[5] Asarin E., Bournez O., Dang T., Maler O., Pnueli A. Effective synthesis of switching controllers for linear systems. Proceedings of the IEEE, 2000, 88(7):1011-1025.
[6] Barto A.G., Mahadevan S. Recent Advances in Hierarchical Reinforcement Learning. Discrete Event Dynamic Systems, 2003, 13(4):341-379.
[7] Bloem Nils., 2018, CoRR, abs/1807.06096.
[8] Bohrer B., 2018, ACM SIGPLAN Notices, 53:617, DOI 10.1145/3192366.3192406, 10.1145/3296979.3192406.
[9] Brázdil T., 2014, Lecture Notes in Computer Science, 8837:98, DOI 10.1007/978-3-319-11936-6_8.
[10] Fridovich-Keil D., 2018, IEEE International Conference on Robotics and Automation, p. 387.