Verifying Controllers With Vision-Based Perception Using Safe Approximate Abstractions

Cited by: 17
Authors
Hsieh, Chiao [1 ]
Li, Yangge [2 ]
Sun, Dawei [2 ]
Joshi, Keyur [1 ]
Misailovic, Sasa [2 ]
Mitra, Sayan [1 ]
Affiliations
[1] Univ Illinois, Dept Comp Sci, Urbana, IL 61801 USA
[2] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
Funding
National Institute of Food and Agriculture (NIFA), USA;
Keywords
Safety; Integrated circuit modeling; Control systems; Artificial neural networks; Computational modeling; Analytical models; Aerospace electronics; Abstraction; autonomous systems; formal verification; vision-based control; HYBRID SYSTEMS; INVARIANTS;
DOI
10.1109/TCAD.2022.3197508
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Fully formal verification of perception models is likely to remain challenging in the foreseeable future, and yet these models are being integrated into safety-critical control systems. We present a practical method for reasoning about the safety of such systems. Our method systematically constructs approximations of perception models from system-level safety requirements, data, and program analysis of the modules that are downstream from perception. These approximations have desirable properties: they are low-dimensional, intelligible, and tractable. The closed-loop system, with the approximation substituted for the actual perception model, is verified to be safe. Establishing the formal relationship between the actual and the approximate perception models remains well beyond available verification techniques; however, we provide a useful empirical measure of their closeness called precision. Overall, our method can trade off the size of the approximation against its precision. We apply the method to two significant case studies: 1) a vision-based lane tracking controller for an autonomous vehicle and 2) a controller for an agricultural robot. We show how the generated approximations for each system can be composed with the downstream modules and verified using program analysis tools like CBMC. Detailed evaluations of the impact of approximation size and environmental parameters (e.g., lighting, road surface, and plant type) on the precision of the generated approximations suggest that the approach can be useful for realistic control systems.
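The workflow sketched in the abstract, replacing the vision component with a set-valued approximation, composing it with the downstream control code, and checking a system-level safety assertion with a bounded model checker, can be pictured with a small hypothetical C program. Everything below (the perceive and control functions, error bounds, gains, dynamics, and horizon) is an assumption made for illustration and is not taken from the paper; the only tool-specific pieces are CBMC's nondet_* and __CPROVER_assume conventions for constrained nondeterministic inputs.

/* Hypothetical sketch (not the paper's code or models): an interval-valued
 * approximation of the perception component is composed with a simple
 * downstream steering controller and simplified vehicle kinematics, and the
 * loop is annotated with a safety assertion that a bounded model checker
 * such as CBMC can check.  All bounds, gains, and dynamics are illustrative. */
#include <assert.h>

float nondet_float(void);  /* CBMC convention: nondeterministic input;
                              __CPROVER_assume is CBMC's built-in assumption */

#define EPS_D    0.20f  /* assumed perception error bound on distance (m)  */
#define EPS_PSI  0.05f  /* assumed perception error bound on heading (rad) */
#define LANE     1.00f  /* system-level safety requirement: |d| <= LANE    */
#define DT       0.10f  /* control period (s)                              */
#define SPEED    2.00f  /* assumed constant forward speed (m/s)            */
#define STEPS    20     /* bounded verification horizon                    */

/* Approximate perception: instead of running the vision network, return any
 * value within an error ball around the true state; in the paper's workflow
 * such radii would be derived from data and the safety requirement. */
static float perceive(float truth, float eps) {
    float est = nondet_float();
    __CPROVER_assume(est >= truth - eps && est <= truth + eps);
    return est;
}

/* Downstream module: an illustrative proportional steering law. */
static float control(float d_est, float psi_est) {
    return -0.8f * d_est - 1.5f * psi_est;
}

int main(void) {
    float d   = nondet_float();  /* true cross-track distance */
    float psi = nondet_float();  /* true heading error        */
    __CPROVER_assume(d >= -0.5f && d <= 0.5f);
    __CPROVER_assume(psi >= -0.1f && psi <= 0.1f);

    for (int k = 0; k < STEPS; ++k) {
        /* Perception is replaced by its approximation; the controller is the
         * actual downstream code under analysis. */
        float u = control(perceive(d, EPS_D), perceive(psi, EPS_PSI));

        /* Simplified kinematic update standing in for the vehicle model. */
        psi += DT * u;
        d   += DT * SPEED * psi;

        assert(d >= -LANE && d <= LANE);  /* checked by the model checker */
    }
    return 0;
}

Under CBMC, the loop would be unwound for the bounded horizon (e.g., cbmc sketch.c --unwind 21, where sketch.c names this file). A failed check would suggest that the assumed error bounds are too coarse for the safety requirement, mirroring the size-versus-precision tradeoff the paper evaluates.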
Pages: 4205-4216
Number of pages: 12