Learning Sparse Evidence-Driven Interpretation to Understand Deep Reinforcement Learning Agents

Cited by: 2
Authors
Dao, Giang [1 ]
Huff, Wesley Houston [1 ]
Lee, Minwoo [1 ]
Affiliations
[1] Univ North Carolina Charlotte, Dept Comp Sci, Charlotte, NC 28223 USA
Source
2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021
Keywords
explanation; sparsity; evidence-driven interpretation; reinforcement learning
DOI
10.1109/SSCI50451.2021.9660192
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recent advances in machine learning require interpretability and explainability for reliable and trustworthy systems. However, explanations of machine learning models are often hard to achieve given the large amount of information produced by complex models. Evidence-driven reinforcement learning provides snapshot images for understanding the learning experiences and the learned behaviors; however, it requires human labor to analyze the large number of retrieved snapshot images. Imposing sparsity on the evidence collection process is therefore essential to make human interpretation tractable. In this paper, we propose novel sparse evidence collection methods that discard less important images during interpretation. We discuss the trade-offs among sparsity, re-approximation accuracy, and the quality of evidence in different Atari game environments.
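
As a rough illustration of sparsifying an evidence set of snapshot images, the Python sketch below is not the authors' algorithm: the per-snapshot importance scores, the keep ratio, and the nearest-neighbour proxy for re-approximation accuracy are all illustrative assumptions. It simply keeps the highest-scoring snapshots and reports how closely the sparse subset covers the full collection.

# Minimal sketch (not the paper's method): keep only the top-scoring
# snapshot images and use a nearest-neighbour distance as a crude proxy
# for re-approximation accuracy. Scores, keep ratio, and the metric
# are assumptions for illustration only.
import numpy as np

def sparsify_evidence(snapshots, scores, keep_ratio=0.1):
    """Keep the top `keep_ratio` fraction of snapshots by importance score.

    snapshots : array of shape (N, H, W) - collected evidence images
    scores    : array of shape (N,)      - assumed per-snapshot importance
    """
    n_keep = max(1, int(len(snapshots) * keep_ratio))
    order = np.argsort(scores)[::-1]      # most important first
    kept = order[:n_keep]
    return snapshots[kept], kept

def reapproximation_error(full, sparse):
    """Mean distance from each snapshot in the full set to its nearest
    kept snapshot (lower means the sparse subset covers the set better)."""
    flat_sparse = sparse.reshape(len(sparse), -1)
    dists = []
    for x in full:
        d = np.min(np.linalg.norm(flat_sparse - x.reshape(1, -1), axis=1))
        dists.append(d)
    return float(np.mean(dists))

# Toy usage: 500 random "snapshots" with random importance scores.
rng = np.random.default_rng(0)
snaps = rng.random((500, 84, 84))
scores = rng.random(500)
sparse_snaps, idx = sparsify_evidence(snaps, scores, keep_ratio=0.05)
print(len(sparse_snaps), reapproximation_error(snaps, sparse_snaps))

In this toy setting, raising the keep ratio lowers the re-approximation error but increases the number of images a human must inspect, which is the sparsity/quality trade-off the abstract refers to.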
Pages: 7