Deep Reinforcement Learning for Dynamic Stock Option Hedging: A Review

Cited by: 2
Authors
Pickard, Reilly [1]
Lawryshyn, Yuri [2]
Affiliations
[1] Univ Toronto, Dept Mech & Ind Engn, Toronto, ON M5S 3G8, Canada
[2] Univ Toronto, Dept Chem Engn & Appl Chem, Toronto, ON M5S 3E5, Canada
Keywords
reinforcement learning; neural networks; dynamic stock option hedging; quantitative finance; financial risk management; volatility
DOI
10.3390/math11244943
CLC Number
O1 [Mathematics]
Subject Classification
0701; 070101
Abstract
This paper reviews 17 studies addressing dynamic option hedging in frictional markets through Deep Reinforcement Learning (DRL). Specifically, this work analyzes the DRL models, state and action spaces, reward formulations, data generation processes, and results of each study. It is found that policy-based methods such as DDPG are more commonly employed, owing to their suitability for continuous action spaces. Despite diverse state space definitions, there is no consensus on which variables to include, prompting a call for thorough sensitivity analyses. Mean-variance metrics prevail in reward formulations, with episodic return, VaR, and CVaR also yielding comparable results. Geometric Brownian motion (GBM) is the primary data generation process, supplemented by stochastic volatility models such as SABR (stochastic alpha, beta, rho) and the Heston model. RL agents, particularly those that monitor transaction costs, consistently outperform the Black-Scholes Delta method in frictional environments. Although results are consistent under constant and stochastic volatility scenarios, variations arise when real data are employed. The lack of a standardized testing dataset or universal benchmark in the RL hedging space makes it difficult to compare results across studies. A recommended future direction is the implementation of DRL for hedging American options and an investigation of how DRL compares with other numerical American option hedging methods.
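To make the abstract's recurring ingredients concrete, the Python sketch below (our illustration, not code from the paper or any reviewed study) simulates GBM paths, Delta-hedges a short European call under proportional transaction costs, and scores the terminal P&L with a mean-variance objective of the kind the review describes as a reward; every function name, parameter value, and the cost level are hypothetical assumptions.

import numpy as np
from scipy.stats import norm

def gbm_paths(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Simulate geometric Brownian motion paths: dS = mu*S*dt + sigma*S*dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
    return s0 * np.exp(log_paths)  # shape: (n_paths, n_steps + 1)

def bs_call_price(s, k, r, sigma, tau):
    """Black-Scholes price of a European call with time to maturity tau."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return s * norm.cdf(d1) - k * np.exp(-r * tau) * norm.cdf(d2)

def bs_delta(s, k, r, sigma, tau):
    """Black-Scholes Delta of a European call."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def delta_hedge_pnl(paths, k, r, sigma, T, cost=1e-3):
    """Terminal P&L from Delta-hedging one short call along each path,
    charging a proportional cost on every rebalancing trade."""
    n_paths, n_cols = paths.shape
    n_steps = n_cols - 1
    dt = T / n_steps
    cash = bs_call_price(paths[:, 0], k, r, sigma, T)  # premium received at t=0
    shares = np.zeros(n_paths)
    for t in range(n_steps):
        tau = T - t * dt                        # time to maturity at this rebalance
        target = bs_delta(paths[:, t], k, r, sigma, tau)
        trade = target - shares
        cash -= trade * paths[:, t] + cost * np.abs(trade) * paths[:, t]
        cash *= np.exp(r * dt)                  # accrue interest to the next step
        shares = target
    payoff = np.maximum(paths[:, -1] - k, 0.0)  # short-call obligation at expiry
    return cash + shares * paths[:, -1] - payoff

def mean_variance_score(pnl, kappa=0.1):
    """Mean-variance objective E[PnL] - kappa * Var[PnL]."""
    return pnl.mean() - kappa * pnl.var()

paths = gbm_paths(s0=100.0, mu=0.05, sigma=0.2, T=30 / 365,
                  n_steps=30, n_paths=10_000)
pnl = delta_hedge_pnl(paths, k=100.0, r=0.02, sigma=0.2, T=30 / 365, cost=1e-3)
print(f"mean P&L {pnl.mean():+.4f}, std {pnl.std():.4f}, "
      f"mean-variance score {mean_variance_score(pnl):+.4f}")

A DRL hedger of the kind surveyed would replace the bs_delta rebalancing rule with a learned policy and use a score of this kind as its reward, which is how agents that monitor transaction costs can outperform the Black-Scholes Delta benchmark in frictional settings.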
Pages: 19