Efficient Off-Policy Safe Reinforcement Learning Using Trust Region Conditional Value At Risk

Cited by: 9
Authors
Kim, Dohyeong [1 ,2 ]
Oh, Songhwai [1 ,2 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul 08826, South Korea
[2] Seoul Natl Univ, ASRI, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Reinforcement learning; robot safety; collision avoidance;
DOI
10.1109/LRA.2022.3184793
Chinese Library Classification (CLC) number
TP24 [Robotics];
Discipline classification codes
080202; 1405
Abstract
This letter aims to solve a safe reinforcement learning (RL) problem with risk-measure-based constraints. Because risk measures such as conditional value at risk (CVaR) focus on the tail of the cost distribution, constraining them can effectively prevent failures in the worst case. An on-policy safe RL method, called TRC, handles the CVaR-constrained RL problem with a trust region method and can generate policies with almost zero constraint violations and high returns. However, to achieve strong performance in complex environments and to satisfy safety constraints quickly, RL methods must be sample efficient. To this end, we propose an off-policy safe RL method with CVaR constraints, called off-policy TRC. If off-policy data from replay buffers is used directly to train TRC, the estimation error caused by distributional shift degrades performance. To resolve this issue, we propose novel surrogate functions that reduce the effect of the distributional shift and introduce an adaptive trust-region constraint that keeps the policy from deviating far from the replay buffer distribution. The proposed method was evaluated in simulation and real-world environments; it satisfied safety constraints within a few steps while achieving high returns, even in complex robotic tasks.
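As context for the CVaR constraint mentioned in the abstract (the following is the standard definition assumed here, not a formula quoted from the letter): for a cost return $C$ with a continuous distribution and confidence level $\alpha \in (0,1)$, CVaR is the expected cost over the worst $(1-\alpha)$ fraction of outcomes,

$$\mathrm{CVaR}_{\alpha}(C) = \mathbb{E}\big[\, C \mid C \ge \mathrm{VaR}_{\alpha}(C) \,\big], \qquad \mathrm{VaR}_{\alpha}(C) = \inf\{c \in \mathbb{R} : \Pr(C \le c) \ge \alpha\},$$

so a constraint of the form $\mathrm{CVaR}_{\alpha}(C) \le d$ bounds the tail of the cost distribution rather than only its mean.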
Pages: 7644-7651
Number of pages: 8