Modeling limit order trading with a continuous action policy for deep reinforcement learning

Cited: 4
Authors
Tsantekidis, Avraam [1 ]
Passalis, Nikolaos [1 ]
Tefas, Anastasios [1 ]
Affiliation
[1] Aristotle Univ Thessaloniki, Sch Informat, Thessaloniki, Greece
Keywords
Financial trading; Limit orders; Policy gradient; Deep reinforcement learning; Prediction; Market
DOI
10.1016/j.neunet.2023.05.051
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Limit orders allow buyers and sellers to set a "limit price" they are willing to accept in a trade. Market orders, on the other hand, allow for immediate execution at any price. Thus, market orders are susceptible to slippage, which is the additional cost incurred due to the unfavorable execution of a trade order. As a result, limit orders are often preferred, since they protect traders from excessive slippage costs due to larger than expected price fluctuations. Despite the price guarantees of limit orders, they are more complex to use than market orders. Orders with overly optimistic limit prices might never be executed, which increases the risk of employing limit orders in Machine Learning (ML)-based trading systems. Indeed, the current ML literature for trading almost exclusively relies on market orders. To overcome this limitation, a Deep Reinforcement Learning (DRL) approach is proposed to model trading agents that use limit orders. The proposed method (a) employs a continuous probability distribution to model limit prices, and (b) provides the ability to place market orders when the risk of non-execution is more significant than the cost of slippage. Extensive experiments are conducted with multiple currency pairs, using hourly price intervals, validating the effectiveness of the proposed method and paving the way for introducing limit order modeling in DRL-based trading. © 2023 Elsevier Ltd. All rights reserved.
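The abstract's action scheme can be illustrated with a minimal sketch: a policy head parameterizes (a) a Gaussian over the limit-price offset from the current mid price, and (b) a Bernoulli switch that falls back to a market order when immediate execution is preferred over the risk of non-execution. All function and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_order(mu, log_sigma, market_logit, mid_price):
    """Sample one trading action under the assumed scheme:
    a sigmoid on `market_logit` gives the probability of issuing a
    market order; otherwise a limit price is drawn from a Gaussian
    offset N(mu, exp(log_sigma)) around the mid price."""
    p_market = 1.0 / (1.0 + np.exp(-market_logit))   # sigmoid
    if rng.random() < p_market:
        # Accept slippage in exchange for guaranteed execution.
        return ("market", mid_price)
    # Continuous limit-price offset sampled from the policy's Gaussian.
    offset = rng.normal(mu, np.exp(log_sigma))
    return ("limit", mid_price + offset)
```

In a policy-gradient setting, `mu`, `log_sigma`, and `market_logit` would be outputs of the policy network, and the log-probability of the sampled action would be used in the gradient estimate.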
Pages: 506-515
Page count: 10