Model tree methods for explaining deep reinforcement learning agents in real-time robotic applications

Cited by: 11
Authors
Gjærum, Vilde B. [1]
Strümke, Inga [2]
Løver, Jakob [3]
Miller, Timothy [4 ]
Lekkas, Anastasios M. [1 ]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Engn Cybernet, N-7034 Trondheim, Norway
[2] Norwegian Univ Sci & Technol, Dept Comp Sci, N-7034 Trondheim, Norway
[3] Norwegian Univ Sci & Technol, Dept Engn Cybernet, N-7052 Trondheim, Norway
[4] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
Keywords
Explainable artificial intelligence; Model trees; Reinforcement learning; Robotics
DOI
10.1016/j.neucom.2022.10.014
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep reinforcement learning has proven useful in the field of robotics, but the black-box nature of deep neural networks limits the applicability of deep reinforcement learning agents to real-world tasks. The field of explainable artificial intelligence addresses this by developing explanation methods that aim to explain such agents to humans. Model trees used as surrogate models have proven useful for producing explanations for black-box models in real-world robotic applications, in particular due to their ability to provide explanations in real time. In this paper, we provide an overview and analysis of available methods for building model trees for explaining deep reinforcement learning agents solving robotics tasks. We find that supporting multiple outputs is important for the model to capture the dependencies between coupled output features, i.e., actions. Additionally, our results indicate that introducing domain knowledge via a hierarchy among the input features during the building process yields higher accuracy and a faster building process. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
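The surrogate-tree idea described in the abstract can be sketched in a few lines. The sketch below is illustrative only and is not the authors' method: the paper builds model trees (trees with linear models in the leaves) and can exploit a hierarchy among input features during building, whereas this sketch stands in a piecewise-constant multi-output DecisionTreeRegressor from scikit-learn, and the policy function is a hypothetical placeholder for a trained agent.

```python
# Minimal sketch: fitting a multi-output regression tree as a surrogate to a
# (hypothetical) black-box DRL policy, so the tree's decision path can be read
# out as a real-time explanation. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def policy(states):
    # Placeholder for the trained black-box policy (states -> actions);
    # in practice this would query the DRL agent instead of a dummy mapping.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(states.shape[1], 2))
    return np.tanh(states @ weights)

# 1. Sample states the agent visits and query the policy for its actions.
rng = np.random.default_rng(1)
states = rng.uniform(-1.0, 1.0, size=(5000, 8))
actions = policy(states)            # shape (5000, 2): two coupled action outputs

# 2. Fit one multi-output surrogate tree so both action dimensions share a
#    single split structure, letting the tree capture their dependencies.
surrogate = DecisionTreeRegressor(max_depth=6)
surrogate.fit(states, actions)

# 3. A shallow tree is cheap to evaluate at run time, so the decision path
#    (the sequence of feature thresholds a state falls through) can be
#    reported alongside the predicted action as an explanation.
print(surrogate.predict(states[:1]))
print(surrogate.decision_path(states[:1]))
```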
Pages: 133-144
Number of pages: 12