Modeling limit order trading with a continuous action policy for deep reinforcement learning

Cited by: 4
Authors:
Tsantekidis, Avraam [1]
Passalis, Nikolaos [1]
Tefas, Anastasios [1]
Affiliations:
[1] Aristotle University of Thessaloniki, School of Informatics, Thessaloniki, Greece
Keywords:
Financial trading; Limit orders; Policy gradient; Deep reinforcement learning; Prediction; Market
DOI:
10.1016/j.neunet.2023.05.051
Chinese Library Classification:
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Limit orders allow buyers and sellers to set a "limit price" they are willing to accept in a trade. Market orders, by contrast, execute immediately at whatever price is available, making them susceptible to slippage: the additional cost incurred when an order executes at an unfavorable price. As a result, limit orders are often preferred, since they protect traders from excessive slippage costs caused by larger-than-expected price fluctuations. Despite their price guarantees, limit orders are more complex to use than market orders: an order with an overly optimistic limit price may never be executed, which increases the risk of employing limit orders in Machine Learning (ML)-based trading systems. Indeed, the current ML literature for trading relies almost exclusively on market orders. To overcome this limitation, a Deep Reinforcement Learning (DRL) approach is proposed for modeling trading agents that use limit orders. The proposed method (a) employs a continuous probability distribution to model limit prices, and (b) allows the agent to place a market order when the risk of non-execution outweighs the cost of slippage. Extensive experiments with multiple currency pairs at hourly price intervals validate the effectiveness of the proposed method and pave the way for introducing limit order modeling in DRL-based trading. © 2023 Elsevier Ltd. All rights reserved.
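To make the abstract's modeling idea concrete, the following is a minimal sketch, not the authors' implementation: a policy head that outputs a continuous (here, Gaussian) distribution over a limit-price offset, together with a Bernoulli gate that lets the agent fall back to a market order. All names, dimensions, and the PyTorch framing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LimitOrderPolicy(nn.Module):
    """Hypothetical policy head for limit-order trading (illustrative only):
    a Gaussian over the limit-price offset plus a Bernoulli market-order gate."""

    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.price_mu = nn.Linear(hidden_dim, 1)       # mean of the limit-price offset
        self.price_log_std = nn.Linear(hidden_dim, 1)  # log std of the offset
        self.market_logit = nn.Linear(hidden_dim, 1)   # logit of P(place market order)

    def forward(self, state: torch.Tensor):
        h = self.backbone(state)
        mu = self.price_mu(h)
        std = self.price_log_std(h).clamp(-5.0, 2.0).exp()  # keep std numerically sane
        price_dist = torch.distributions.Normal(mu, std)
        market_dist = torch.distributions.Bernoulli(logits=self.market_logit(h))
        return price_dist, market_dist

# Sampling an action and its log-probability, as a policy-gradient method would:
policy = LimitOrderPolicy(state_dim=10)
state = torch.randn(1, 10)
price_dist, market_dist = policy(state)
offset = price_dist.sample()          # limit price = reference price + offset
use_market = market_dist.sample()     # 1 -> market order, 0 -> limit order
log_prob = price_dist.log_prob(offset) + market_dist.log_prob(use_market)
```

Jointly maximizing this log-probability weighted by the trading reward would train both the continuous limit-price choice and the discrete market-order fallback, matching the two capabilities (a) and (b) described above.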
Pages: 506-515
Page count: 10