MOT: A Mixture of Actors Reinforcement Learning Method by Optimal Transport for Algorithmic Trading

Cited: 0
Authors
Cheng, Xi [1 ]
Zhang, Jinghao [1 ]
Zeng, Yunan [1 ]
Xue, Wenfang [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
Source
ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT IV, PAKDD 2024 | 2024 / Vol. 14648
Funding
National Natural Science Foundation of China;
Keywords
Algorithmic trading; Reinforcement learning; Optimal transport;
DOI
10.1007/978-981-97-2238-9_3
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Algorithmic trading refers to executing buy and sell orders for specific assets based on automatically identified trading opportunities. Strategies based on reinforcement learning (RL) have demonstrated remarkable capabilities in addressing algorithmic trading problems. However, trading patterns differ across market conditions because the data distribution shifts, and ignoring these multiple patterns in the data undermines RL performance. In this paper, we propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market. Furthermore, we incorporate the Optimal Transport (OT) algorithm to allocate samples to the appropriate actor by introducing a regularization loss term. Additionally, we propose a Pretrain Module that facilitates imitation learning by aligning the actors' outputs with an expert strategy, which better balances exploration and exploitation in RL. Experimental results on real futures market data demonstrate that MOT delivers excellent profit capability while balancing risk. Ablation studies validate the effectiveness of MOT's components.
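The abstract describes the architecture only at a high level. As a rough illustration of how an OT-based sample-to-actor allocation can be realized, the sketch below computes an entropy-regularized transport plan with the Sinkhorn algorithm and uses it as soft assignment weights. This is a minimal sketch under stated assumptions, not the paper's implementation: the cost definition, the uniform marginals, and all names and hyperparameters (`sinkhorn`, `eps`, `actor_scores`) are illustrative choices of ours.

```python
import math
import torch

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropy-regularized OT between uniform marginals over B samples
    and K actors; returns a B x K transport plan (rows sum to 1/B)."""
    B, K = cost.shape
    log_mu = torch.full((B,), -math.log(B))   # uniform marginal over samples
    log_nu = torch.full((K,), -math.log(K))   # uniform marginal over actors
    log_T = -cost / eps                       # initial (unnormalized) plan in log domain
    for _ in range(n_iters):
        # Alternately rescale rows and columns to match the two marginals.
        log_T += log_mu[:, None] - torch.logsumexp(log_T, dim=1, keepdim=True)
        log_T += log_nu[None, :] - torch.logsumexp(log_T, dim=0, keepdim=True)
    return log_T.exp()

# Hypothetical usage: actor_scores[b, k] scores how well actor k fits sample b
# (e.g., that actor's log-probability of the taken action); its negation is the cost.
B, K = 32, 3                          # batch of 32 samples, 3 actor heads
actor_scores = torch.randn(B, K)      # stand-in for real per-actor scores
plan = sinkhorn(-actor_scores)        # B x K soft sample-to-actor assignment
weights = plan * B                    # rescale so each row sums to 1
# A regularization term could then push each actor toward the samples the plan
# assigns to it, e.g. a per-actor policy loss weighted by `weights`.
```

Running Sinkhorn in the log domain, as above, avoids the numerical underflow that the multiplicative form suffers for small `eps`; the uniform column marginal is what encourages the batch to be spread across actors rather than collapsing onto one.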
Pages: 30-42
Number of pages: 13
Related Papers
50 in total
  • [21] Algorithmic Trading Behavior Identification using Reward Learning Method
    Yang, Steve Y.
    Qiao, Qifeng
    Beling, Peter A.
    Scherer, William T.
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 3807 - 3814
  • [22] A novel deep reinforcement learning framework with BiLSTM-Attention networks for algorithmic trading
    Huang, Yuling
    Wan, Xiaoxiao
    Zhang, Lin
    Lu, Xiaoping
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 240
  • [23] An Optimal-Transport-Based Reinforcement Learning Approach for Computation Offloading
    Li, Zhuo
    Zhou, Xu
    Li, Taixin
    Liu, Yang
2021 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2021
  • [24] Algorithmic Currency Trading based on Reinforcement Learning Combining Action Shaping and Advantage Function Shaping
    Sun, Hongyong
    Sang, Nan
    Wu, Jia
    Wang, Chen
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 1494 - 1498
  • [25] Multi-type data fusion framework based on deep reinforcement learning for algorithmic trading
    Liu, Peipei
    Zhang, Yunfeng
    Bao, Fangxun
    Yao, Xunxiang
    Zhang, Caiming
    APPLIED INTELLIGENCE, 2023, 53 (02) : 1683 - 1706
  • [27] Application of A Deep Reinforcement Learning Method in Financial Market Trading
    Ma, Lixin
    Liu, Yang
    2019 11TH INTERNATIONAL CONFERENCE ON MEASURING TECHNOLOGY AND MECHATRONICS AUTOMATION (ICMTMA 2019), 2019, : 421 - 425
  • [28] Optimal Control with Reinforcement Learning using Reservoir Computing and Gaussian Mixture
    Engedy, Istvan
    Horvath, Gabor
    2012 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE (I2MTC), 2012, : 1062 - 1066
  • [29] ON REINFORCEMENT LEARNING IN OPTIMIZATION HEURISTICS AND OPTIMAL METHOD SWITCHING
    Macek, Karel
    Kukal, Jaromir
    Bostik, Josef
    16TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING MENDEL 2010, 2010, : 22 - 28
  • [30] Reinforcement-learning-based optimal trading in a simulated futures market with heterogeneous agents
    Aydin, Nadi Serhan
    SIMULATION-TRANSACTIONS OF THE SOCIETY FOR MODELING AND SIMULATION INTERNATIONAL, 2022, 98 (04): : 321 - 333