Verifying Learning-Based Robotic Navigation Systems

Cited by: 7
Authors
Amir, Guy [1]
Corsi, Davide [2]
Yerushalmi, Raz [1,3]
Marzari, Luca [2]
Harel, David [3]
Farinelli, Alessandro [2]
Katz, Guy [1]
Affiliations
[1] Hebrew Univ Jerusalem, Jerusalem, Israel
[2] Univ Verona, Verona, Italy
[3] Weizmann Inst Sci, Rehovot, Israel
Source
TOOLS AND ALGORITHMS FOR THE CONSTRUCTION AND ANALYSIS OF SYSTEMS, PT I, TACAS 2023 | 2023 / Vol. 13993
Funding
Israel Science Foundation;
Keywords
REINFORCEMENT; VERIFICATION;
DOI
10.1007/978-3-031-30823-9_31
Chinese Library Classification
TP31 [Computer Software];
Discipline Classification Codes
081202; 0835;
Abstract
Deep reinforcement learning (DRL) has become a dominant deep-learning paradigm for tasks where complex policies are learned within reactive systems. Unfortunately, these policies are known to be susceptible to bugs. Despite significant progress in DNN verification, there has been little work demonstrating the use of modern verification tools on real-world, DRL-controlled systems. In this case study, we attempt to begin bridging this gap, and focus on the important task of mapless robotic navigation - a classic robotics problem, in which a robot, usually controlled by a DRL agent, needs to efficiently and safely navigate through an unknown arena towards a target. We demonstrate how modern verification engines can be used for effective model selection, i.e., selecting the best available policy for the robot in question from a pool of candidate policies. Specifically, we use verification to detect and rule out policies that may demonstrate suboptimal behavior, such as collisions and infinite loops. We also apply verification to identify models with overly conservative behavior, thus allowing users to choose superior policies, which might be better at finding shorter paths to a target. To validate our work, we conducted extensive experiments on an actual robot, and confirmed that the suboptimal policies detected by our method were indeed flawed. We also demonstrate the superiority of our verification-driven approach over state-of-the-art gradient-based attacks. Our work is the first to establish the usefulness of DNN verification in identifying and filtering out suboptimal DRL policies in real-world robots, and we believe that the methods presented here are applicable to a wide range of systems that incorporate deep-learning-based agents.
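Note: the abstract does not name a particular verification engine or property encoding. The minimal Python sketch below illustrates the kind of query such verification-driven model selection involves, using the Marabou verifier's Python bindings (maraboupy), which the authors' group maintains. The policy file names, the sensor layout (input 0 taken as a normalized front-facing lidar reading), and the action layout (output 0 taken as "move forward") are all hypothetical, and the solve() return convention assumed is that of recent maraboupy releases, where it yields an exit code of "sat" or "unsat".

# Illustrative sketch only; engine, encoding, and I/O semantics are assumptions,
# not taken from the paper.
from maraboupy import Marabou

CANDIDATE_POLICIES = ["policy_a.onnx", "policy_b.onnx"]  # hypothetical files

def may_collide(onnx_path):
    """Return True if the verifier finds any input on which the policy
    selects 'move forward' while an obstacle is directly ahead."""
    net = Marabou.read_onnx(onnx_path)
    inputs = net.inputVars[0].flatten()
    outputs = net.outputVars[0].flatten()

    # Precondition: the front lidar reports an obstacle very close (<= 0.05);
    # all other (normalized) sensor readings are left free in [0, 1].
    net.setLowerBound(inputs[0], 0.0)
    net.setUpperBound(inputs[0], 0.05)
    for v in inputs[1:]:
        net.setLowerBound(v, 0.0)
        net.setUpperBound(v, 1.0)

    # Postcondition (the unsafe outcome): 'forward' scores at least as high
    # as every alternative, i.e. outputs[i] - outputs[0] <= 0 for all i > 0.
    for v in outputs[1:]:
        net.addInequality([v, outputs[0]], [1.0, -1.0], 0.0)

    # "sat" means a concrete violating input exists; "unsat" proves that no
    # such input exists, so the policy passes this particular safety check.
    exit_code, _, _ = net.solve(verbose=False)
    return exit_code == "sat"

safe_policies = [p for p in CANDIDATE_POLICIES if not may_collide(p)]
print("Policies passing the collision check:", safe_policies)

A policy surviving this query is only guaranteed safe with respect to the encoded property; in the paper's setting, analogous queries target other suboptimal behaviors, such as infinite loops and overly conservative maneuvering.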
Pages: 607-627
Number of pages: 21