Improving Robustness via Risk Averse Distributional Reinforcement Learning

Cited by: 0
Authors
Singh, Rahul [1 ]
Zhang, Qinsheng [1 ]
Chen, Yongxin [1 ]
Affiliations
[1] Georgia Inst Technol, Sch Aerosp Engn, Atlanta, GA 30332 USA
Source
LEARNING FOR DYNAMICS AND CONTROL, VOL 120 | 2020 / Vol. 120
Keywords
Risk sensitive control; reinforcement learning; distributional reinforcement learning; robust reinforcement learning; TIME;
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One major obstacle precluding the success of reinforcement learning in real-world applications is the lack of robustness of the trained policies, whether to model uncertainties or to external disturbances. Robustness is especially critical when policies are trained in simulation rather than in real-world environments. In this work, we propose a risk-aware algorithm for learning robust policies in order to bridge the gap between simulation training and real-world implementation. Our algorithm builds on the recently developed distributional RL framework. We incorporate the CVaR risk measure into sample-based distributional policy gradients (SDPG) to learn risk-averse policies that achieve robustness against a range of system disturbances. We validate the robustness of risk-aware SDPG on multiple environments.
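The core idea behind the risk-averse objective can be illustrated with a minimal sketch: given samples of the return distribution (the kind of representation a sample-based distributional method like SDPG maintains), CVaR at level α is the mean of the worst α-fraction of those samples. The function below is an illustrative stand-in under that definition, not the paper's implementation.

```python
def cvar(return_samples, alpha):
    """Conditional Value-at-Risk: mean of the worst alpha-fraction of returns.

    A risk-averse agent maximizes this quantity instead of the plain mean,
    which penalizes policies whose return distribution has a heavy lower tail.
    """
    assert 0 < alpha <= 1, "alpha must lie in (0, 1]"
    ordered = sorted(return_samples)       # ascending: worst returns first
    k = max(1, int(len(ordered) * alpha))  # number of tail samples to average
    return sum(ordered[:k]) / k
```

For example, two return distributions with the same mean are ranked differently by CVaR when one has a heavier lower tail; at α = 1 the measure reduces to the ordinary expected return, so α interpolates between risk-neutral and worst-case behavior.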
Pages: 958-968
Page count: 11