Visual Sparse Bayesian Reinforcement Learning: A Framework for Interpreting What an Agent Has Learned

Cited by: 0
Authors
Mishra, Indrajeet [1 ]
Dao, Giang [1 ]
Lee, Minwoo [1 ]
Affiliations
[1] University of North Carolina at Charlotte, Department of Computer Science, Charlotte, NC 28223, USA
Source
2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI) | 2018
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a Visual Sparse Bayesian Reinforcement Learning (V-SBRL) framework for recording images of the most important memories from past experience. The key idea is to maintain an image snapshot storage that helps in understanding and analyzing the learned policy. In this extension of the SBRL framework [1], the agent perceives the environment as image state inputs, encodes each image into a feature vector, trains the SBRL module, and stores the raw images. During this process, the snapshot storage keeps only the memories that are relevant to future decisions and discards the less important ones. Visualizing the stored snapshot images allows us to understand the agent's learning process, and the snapshots also explain the exploited policy under different conditions. A navigation task with static obstacles is examined for snapshot analysis.
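To make the pipeline above concrete, here is a minimal Python sketch of the encode-then-filter loop: each observed image is encoded into a feature vector and kept in the snapshot storage only if it is judged relevant. All names (SnapshotStorage, encode, relevance_score) and the threshold rule are illustrative assumptions; the relevance in the paper comes from the sparse Bayesian learning in the SBRL module, which is only stubbed here with a random projection and a fixed weight vector.

```python
import numpy as np

class SnapshotStorage:
    """Keeps raw image snapshots judged relevant for future decisions."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.snapshots = []  # list of (image, feature) pairs

    def maybe_store(self, image, feature, relevance):
        # Keep only memories whose relevance exceeds the threshold;
        # less important memories are simply discarded.
        if relevance >= self.threshold:
            self.snapshots.append((image.copy(), feature.copy()))

def encode(image, projection):
    """Stand-in encoder: flatten the image and apply a fixed random projection."""
    return projection @ image.ravel()

def relevance_score(feature, weights):
    """Placeholder for the sparsity-driven relevance an SBRL module would assign."""
    return float(abs(weights @ feature) / (np.linalg.norm(feature) + 1e-8))

rng = np.random.default_rng(0)
projection = rng.normal(size=(16, 64 * 64))  # 64x64 grayscale image -> 16-dim feature
weights = rng.normal(size=16)                # stand-in for learned SBRL weights
storage = SnapshotStorage(threshold=0.8)

for step in range(100):
    image = rng.random((64, 64))             # stand-in for an observed image state
    feature = encode(image, projection)
    storage.maybe_store(image, feature, relevance_score(feature, weights))

print(f"kept {len(storage.snapshots)} of 100 snapshots for later visualization")
```

In the full framework, the retained raw images would later be visualized to show which states drive the learned policy, which is the interpretability benefit the abstract describes.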
Pages: 1427-1434
Number of pages: 8
Related Papers
50 records in total
  • [1] What exactly is learned in visual statistical learning? Insights from Bayesian modeling
    Siegelman, Noam
    Bogaerts, Louisa
    Armstrong, Blair C.
    Frost, Ram
    COGNITION, 2019, 192
  • [2] What has to be learned in motor learning?
    Bekkering, H
    Heck, D
    Sultan, F
    BEHAVIORAL AND BRAIN SCIENCES, 1996, 19 (03) : 436+
  • [3] A parallel framework for Bayesian reinforcement learning
    Barrett, Enda
    Duggan, Jim
    Howley, Enda
    CONNECTION SCIENCE, 2014, 26 (01) : 7 - 23
  • [4] Sparse Bayesian learning for efficient visual tracking
    Williams, O
    Blake, A
    Cipolla, R
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2005, 27 (08) : 1292 - 1304
  • [5] What can we learn from what a machine has learned? Interpreting credit risk machine learning models
    Bharodia, Nehalkumar
    Chen, Wei
    JOURNAL OF RISK MODEL VALIDATION, 2021, 15 (02): : 1 - 22
  • [6] Bayesian Reinforcement Learning via Deep, Sparse Sampling
    Grover, Divya
    Basu, Debabrota
    Dimitrakakis, Christos
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 3036 - 3044
  • [7] What has action learning learned to become?
    Pedler, Mike
    Burgoyne, John
    Brook, Cheryl
    ACTION LEARNING, 2005, 2 (01): : 49 - 68
  • [8] What the Bayesian framework has contributed to understanding cognition: Causal learning as a case study
    Holyoak, Keith J.
    Lu, Hongjing
    BEHAVIORAL AND BRAIN SCIENCES, 2011, 34 (04) : 203+
  • [9] Robust Bayesian Inverse Reinforcement Learning with Sparse Behavior Noise
    Zheng, Jiangchuan
    Liu, Siyuan
    Ni, Lionel M.
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014, : 2198 - 2205
  • [10] Inferential Induction: A Novel Framework for Bayesian Reinforcement Learning
    Jorge, Emilio
    Eriksson, Hannes
    Dimitrakakis, Christos
    Basu, Debabrota
    Grover, Divya
    NEURIPS WORKSHOPS, 2020, 2020, 137 : 43 - 52