Risk-Sensitive Reinforcement Learning via Policy Gradient Search

Cited: 15
Authors
Prashanth, L. A. [1 ]
Fu, Michael C. [2 ]
Affiliations
[1] Indian Inst Technol Madras, Chennai, Tamil Nadu, India
[2] Univ Maryland, College Pk, MD 20742 USA
Source
FOUNDATIONS AND TRENDS IN MACHINE LEARNING | 2022, Vol. 15, No. 5
Keywords
MARKOV DECISION-PROCESSES; ACTOR-CRITIC ALGORITHM; STOCHASTIC-APPROXIMATION; PROSPECT-THEORY; DISCRETE-TIME; NEUTRAL/MINIMAX CONTROL; CONVERGENCE RATE; OPTIMIZATION; UTILITY; COST
DOI
10.1561/2200000091
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The objective in a traditional reinforcement learning (RL) problem is to find a policy that optimizes the expected value of a performance metric such as the infinite-horizon cumulative discounted or long-run average cost/reward. In practice, optimizing the expected value alone may not be satisfactory, in that it may be desirable to incorporate the notion of risk into the optimization problem formulation, either in the objective or as a constraint. Various risk measures have been proposed in the literature, e.g., exponential utility, variance, percentile performance, chance constraints, value at risk (quantile), conditional value-at-risk, prospect theory, and its later enhancement, cumulative prospect theory. In this monograph, we consider risk-sensitive RL in two settings: one where the goal is to find a policy that optimizes the usual expected value objective while ensuring that a risk constraint is satisfied, and the other where the risk measure is the objective. We survey some of the recent work in this area specifically where policy gradient search is the solution approach. In the first risk-sensitive RL setting, we cover popular risk measures based on variance, conditional value-at-risk, and chance constraints, and present a template for policy gradient-based risk-sensitive RL algorithms using a Lagrangian formulation. For the setting where risk is incorporated directly into the objective function, we consider an exponential utility formulation, cumulative prospect theory, and coherent risk measures. This non-exhaustive survey aims to give a flavor of the challenges involved in solving risk-sensitive RL problems using policy gradient methods, as well as to outline some potential future research directions.
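
The first setting described above, where a risk constraint sits on top of the usual expected-value objective and is handled through a Lagrangian, can be made concrete with a small numerical sketch. The code below is an illustrative toy example, not the monograph's algorithm: a one-step problem with a Gaussian policy, a conditional value-at-risk (CVaR) constraint on a cost, score-function (REINFORCE-style) gradient estimates, and coupled primal/dual updates on the policy parameter and the Lagrange multiplier. The reward/cost model and all names (rollout, BETA, ALPHA) are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

SIGMA = 1.0   # fixed standard deviation of the Gaussian policy
BETA = 0.9    # CVaR confidence level
ALPHA = 6.0   # risk budget: require CVaR_{0.9}(cost) <= ALPHA
BATCH = 2000  # Monte Carlo samples per iteration

def rollout(theta, n):
    # One-step toy problem: action a ~ N(theta, SIGMA^2),
    # reward = a (the unconstrained optimum pushes theta up),
    # cost = a^2 (risk grows with theta), creating a mean-vs-risk trade-off.
    a = theta + SIGMA * rng.standard_normal(n)
    return a, a, a ** 2

theta, lam = 0.0, 0.0
for _ in range(5000):
    a, r, c = rollout(theta, BATCH)
    score = (a - theta) / SIGMA ** 2   # d/dtheta of log N(a; theta, SIGMA^2)

    # Empirical VaR/CVaR of the cost via the Rockafellar-Uryasev form:
    # CVaR_beta(C) = VaR_beta(C) + E[(C - VaR_beta(C))^+] / (1 - beta).
    var = np.quantile(c, BETA)
    tail = np.maximum(c - var, 0.0)
    cvar = var + tail.mean() / (1.0 - BETA)

    # Score-function estimates of d/dtheta E[reward] and d/dtheta CVaR(cost).
    grad_reward = np.mean(score * r)
    grad_cvar = np.mean(score * tail) / (1.0 - BETA)

    # Primal ascent on the Lagrangian L = E[reward] - lam * (CVaR - ALPHA),
    # projected dual ascent on the multiplier (kept on a slower timescale).
    theta += 1e-3 * (grad_reward - lam * grad_cvar)
    lam = max(0.0, lam + 5e-4 * (cvar - ALPHA))

print(f"theta={theta:.2f}  lambda={lam:.2f}  CVaR~{cvar:.2f}  budget={ALPHA}")

The multiplier step size is deliberately smaller than the policy step size, mirroring the two-timescale structure common in Lagrangian actor-critic schemes: the policy tracks the current multiplier, while the multiplier slowly increases whenever the estimated CVaR exceeds the budget and is projected back to zero when the constraint is slack.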
Pages: 537-693
Page count: 157