Modeling limit order trading with a continuous action policy for deep reinforcement learning

Cited by: 4
Authors
Tsantekidis, Avraam [1 ]
Passalis, Nikolaos [1 ]
Tefas, Anastasios [1 ]
Affiliations
[1] Aristotle Univ Thessaloniki, Sch Informat, Thessaloniki, Greece
Keywords
Financial trading; Limit orders; Policy gradient; Deep reinforcement learning; Prediction; Market
DOI
10.1016/j.neunet.2023.05.051
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Limit orders allow buyers and sellers to set a "limit price" they are willing to accept in a trade. Market orders, on the other hand, allow for immediate execution at any price. Market orders are therefore susceptible to slippage, the additional cost incurred when an order is executed at a less favorable price than expected. As a result, limit orders are often preferred, since they protect traders from excessive slippage costs caused by larger-than-expected price fluctuations. Despite their price guarantees, limit orders are more complex to handle than market orders: orders with overly optimistic limit prices might never be executed, which increases the risk of employing limit orders in Machine Learning (ML)-based trading systems. Indeed, the current ML literature for trading relies almost exclusively on market orders. To overcome this limitation, a Deep Reinforcement Learning (DRL) approach is proposed for modeling trading agents that use limit orders. The proposed method (a) employs a continuous probability distribution to model limit prices, and (b) allows the agent to place market orders when the risk of non-execution outweighs the cost of slippage. Extensive experiments are conducted with multiple currency pairs, using hourly price intervals, validating the effectiveness of the proposed method and paving the way for introducing limit order modeling in DRL-based trading. © 2023 Elsevier Ltd. All rights reserved.
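The abstract does not specify the distribution family or network architecture, so the following is only a minimal PyTorch sketch of the general idea: a policy whose continuous head parameterizes a Gaussian over the limit-price offset and whose discrete head chooses between placing a limit order and a market order. All names, dimensions, and hyperparameters (LimitOrderPolicy, obs_dim, the Gaussian/categorical choice) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class LimitOrderPolicy(nn.Module):
    """Policy with a continuous limit-price head and a discrete order-type head."""

    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
        )
        # Continuous head: mean of a Gaussian over the limit-price offset
        # relative to the current mid price (state-independent log-std).
        self.price_mu = nn.Linear(hidden_dim, 1)
        self.price_log_std = nn.Parameter(torch.zeros(1))
        # Discrete head: logits over {0: limit order, 1: market order}.
        self.order_logits = nn.Linear(hidden_dim, 2)

    def forward(self, obs: torch.Tensor):
        h = self.backbone(obs)
        price_dist = Normal(self.price_mu(h), self.price_log_std.exp())
        type_dist = Categorical(logits=self.order_logits(h))
        return price_dist, type_dist

# Sample an action and its log-probability, as needed for a policy-gradient
# update such as REINFORCE or PPO.
policy = LimitOrderPolicy(obs_dim=40)
obs = torch.randn(1, 40)              # one observation of 40 market features
price_dist, type_dist = policy(obs)
offset = price_dist.sample()          # limit-price offset from the mid price
order_type = type_dist.sample()       # 0 = limit order, 1 = market order
log_prob = price_dist.log_prob(offset).sum(-1) + type_dist.log_prob(order_type)

Sampling the limit price from a continuous distribution keeps the action space differentiable for policy-gradient training, while the discrete head realizes point (b) of the abstract: falling back to a market order when non-execution risk dominates slippage cost.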
Pages: 506-515
Number of pages: 10