Market Making Strategy Optimization via Deep Reinforcement Learning

Cited by: 5
Authors
Sun, Tianyuan [1]
Huang, Dechun [1]
Yu, Jie [1]
Affiliation
[1] Hohai Univ, Business Sch, Nanjing 211100, Peoples R China
Source
IEEE ACCESS | 2022 / Vol. 10
Keywords
Reinforcement learning; Adaptation models; Neural networks; Deep learning; Optimization; Stock markets; Engines; Deep reinforcement learning; LSTM; market making; stock market; LEVEL; LIMIT
DOI
10.1109/ACCESS.2022.3143653
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Optimization of the market making strategy is a vital issue for participants in security markets. Traditional strategies are mostly designed manually: orders are issued mechanically according to rules triggered by predefined market conditions. On the one hand, market conditions cannot be well represented by arbitrarily defined indicators; on the other hand, rule-based strategies cannot fully capture the relations between market conditions and the strategy's actions. It is therefore worthwhile to investigate how a deep reinforcement learning model can address these issues. In this paper, we propose an end-to-end deep reinforcement learning market making model, Deep Reinforcement Learning Market Making. It exploits a long short-term memory (LSTM) network to extract temporal patterns of the market directly from limit order books, and it learns state-action relations via a reinforcement learning approach. To control inventory risk and information asymmetry, a deep Q-network is introduced that adaptively selects different action subsets and trains the market making agent according to the inventory state. Experiments are conducted on a six-month Level-2 data set of 10 stocks from the Shanghai Stock Exchange in China. Our model is compared with a conventional market making baseline and a state-of-the-art market making model. Experimental results show that our approach outperforms the benchmarks on all 10 stocks by at least 10.63%.
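To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of such a pipeline: an LSTM encodes a window of limit-order-book snapshots, a deep Q-network scores quoting actions conditioned on the encoded state and the inventory, and the admissible action subset is restricted by the inventory state. All names, layer sizes, the 40-feature LOB layout, the 9-action quote set, and the inventory thresholds are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch, assuming PyTorch and an invented feature/action layout.
# It illustrates the technique named in the abstract, not the paper's code.
import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    def __init__(self, lob_features=40, hidden=64, n_actions=9):
        super().__init__()
        # Temporal pattern extraction from raw LOB snapshots (assumed layout:
        # 10 price levels x (bid/ask price, bid/ask volume) = 40 features).
        self.lstm = nn.LSTM(lob_features, hidden, batch_first=True)
        # Q-head conditions on the LSTM summary plus a scalar inventory signal.
        self.q_head = nn.Sequential(
            nn.Linear(hidden + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, lob_window, inventory):
        # lob_window: (batch, time, lob_features); inventory: (batch, 1)
        _, (h_n, _) = self.lstm(lob_window)
        state = torch.cat([h_n[-1], inventory], dim=1)
        return self.q_head(state)

def masked_greedy_action(q_values, inventory, max_inventory=100.0):
    # Hypothetical inventory-dependent action subsets: when inventory is
    # near its long limit, forbid buy-heavy quotes (assumed indices 0-2);
    # near its short limit, forbid sell-heavy quotes (assumed indices 6-8).
    mask = torch.zeros_like(q_values, dtype=torch.bool)
    mask[inventory.squeeze(1) > 0.8 * max_inventory, 0:3] = True
    mask[inventory.squeeze(1) < -0.8 * max_inventory, 6:9] = True
    return q_values.masked_fill(mask, float("-inf")).argmax(dim=1)

if __name__ == "__main__":
    net = LSTMQNetwork()
    lob = torch.randn(4, 50, 40)   # batch of 50-step LOB windows (random demo data)
    inv = torch.tensor([[90.0], [0.0], [-95.0], [10.0]])
    action = masked_greedy_action(net(lob, inv / 100.0), inv)
    print(action)
```

Masking inadmissible actions before the argmax is one common way to realize inventory-dependent action subsets in a DQN; the paper's adaptive subset-selection and training procedure may differ in detail.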
Pages: 9085-9093
Page count: 9