Modelling and Verifying Robotic Software that Uses Neural Networks

Cited by: 0
Authors
Attala, Ziggy [1 ]
Cavalcanti, Ana [1 ]
Woodcock, Jim [1 ]
Affiliations
[1] University of York, York, North Yorkshire, England
Source
Theoretical Aspects of Computing, ICTAC 2023, 2023, Vol. 14446
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
verification; CSP; theorem proving; Isabelle; Marabou; collision avoidance
DOI
10.1007/978-3-031-47963-2_3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Verifying learning robotic systems is challenging. Existing techniques and tools for verification of an artificial neural network (ANN) are concerned with component-level properties. Here, we deal with robotic systems whose control software uses ANN components, and with properties of that software that may depend on all components. Our focus is on trained fully connected ReLU neural networks for control. We present an approach to (1) modelling ANN components as part of behavioural models for control software and (2) verification using traditional and ANN-specific verification tools. We describe our results in the context of RoboChart, a domain-specific modelling language for robotics with support for formal verification. We describe our modelling notation and a strategy for automated proof using tools such as Isabelle and Marabou.
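To make the abstract's focus on trained fully connected ReLU networks concrete, the sketch below is illustrative only: the 2-16-16-1 controller shape, the random weights, and the sampled bound check are assumptions, not the paper's RoboChart model or its Marabou encoding. It shows the forward pass of a fully connected ReLU network and the kind of bounded input/output property that ANN-specific tools such as Marabou decide exhaustively over an input region, rather than by sampling.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, layers):
    """Forward pass of a fully connected ReLU network.
    `layers` is a list of (W, b) pairs; ReLU is applied after every
    layer except the last (a common convention for controllers)."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

# Hypothetical 2-16-16-1 controller; a real system would load trained weights.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(16, 2)), rng.normal(size=16)),
    (rng.normal(size=(16, 16)), rng.normal(size=16)),
    (rng.normal(size=(1, 16)), rng.normal(size=1)),
]

# Component-level property of the kind ANN verifiers check:
# for all inputs in [-1, 1]^2, does the output stay within [-10, 10]?
# Sampling only tests the property; a tool like Marabou proves or
# refutes it for the whole input region.
samples = rng.uniform(-1.0, 1.0, size=(10_000, 2))
outputs = np.array([forward(x, layers) for x in samples])
print("output range on samples:", outputs.min(), outputs.max())
print("property held on all samples:", bool((np.abs(outputs) <= 10.0).all()))
```

System-level properties of the kind the paper targets additionally involve the behavioural model of the control software (in RoboChart and CSP), which is why the approach combines a traditional theorem prover (Isabelle) with an ANN-specific verifier (Marabou).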
Pages: 15-35
Page count: 21
Related papers (10 of 50 shown)
  • [1] Modelling and Verifying Dynamic Properties of Biological Neural Networks in Coq
    Bahrami, Abdorrahim
    De Maria, Elisabetta
    Felty, Amy
    9TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SYSTEMS-BIOLOGY AND BIOINFORMATICS (CSBIO 2018), 2018,
  • [2] Verifying Industrial Robotic Applications Using Simulation Software
    Kwak, Daniel
    Mikhail, Maged
    IEEE SOUTHEASTCON 2020, 2020,
  • [3] Portfolio solver for verifying Binarized Neural Networks
    Kovasznai, Gergely
    Gajdar, Krisztian
    Narodytska, Nina
    ANNALES MATHEMATICAE ET INFORMATICAE, 2021, 53 : 183 - 200
  • [4] Verifying Feedforward Neural Networks for Classification in Isabelle/HOL
    Brucker, Achim D.
    Stell, Amy
    FORMAL METHODS, FM 2023, 2023, 14000 : 427 - 444
  • [5] Verifying safety of neural networks from topological perspectives
    Liang, Zhen
    Ren, Dejin
    Xue, Bai
    Wang, Ji
    Yang, Wenjing
    Liu, Wanwei
    SCIENCE OF COMPUTER PROGRAMMING, 2024, 236
  • [6] Verifying and Interpreting Neural Networks Using Finite Automata
    Salzer, Marco
    Alsmann, Eric
    Bruse, Florian
    Lange, Martin
    DEVELOPMENTS IN LANGUAGE THEORY, DLT 2024, 2024, 14791 : 266 - 281
  • [7] A Review of Abstraction Methods Toward Verifying Neural Networks
    Boudardara, Fateh
    Boussif, Abderraouf
    Meyer, Pierre-Jean
    Ghazel, Mohamed
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2024, 23 (04)
  • [8] Techniques for modelling and verifying railway interlockings
    James, Phillip
    Moller, Faron
    Hoang Nga Nguyen
    Roggenbach, Markus
    Schneider, Steve
    Treharne, Helen
    INTERNATIONAL JOURNAL ON SOFTWARE TOOLS FOR TECHNOLOGY TRANSFER, 2014, 16 (06) : 685 - 711
  • [9] VeriCon: Towards Verifying Controller Programs in Software-Defined Networks
    Ball, Thomas
    Bjorner, Nikolaj
    Gember, Aaron
    Itzhaky, Shachar
    Karbyshev, Aleksandr
    Sagiv, Mooly
    Schapira, Michael
    Valadarsky, Asaf
    ACM SIGPLAN NOTICES, 2014, 49 (06) : 282 - 293
  • [10] Verifying Binarized Neural Networks by Angluin-Style Learning
    Shih, Andy
    Darwiche, Adnan
    Choi, Arthur
    THEORY AND APPLICATIONS OF SATISFIABILITY TESTING - SAT 2019, 2019, 11628 : 354 - 370