Hybrid Deep Reinforcement Learning for Pairs Trading

Cited by: 13
Authors
Kim, Sang-Ho [1]
Park, Deog-Yeong [1]
Lee, Ki-Hoon [1]
Affiliations
[1] Kwangwoon Univ, Sch Comp & Informat Engn, 20 Kwangwoon Ro, Seoul 01897, South Korea
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, No. 3
Funding
National Research Foundation of Singapore
Keywords
algorithmic trading; pairs trading; deep learning; reinforcement learning; time series; representation; cointegration
DOI
10.3390/app12030944
CLC Number
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Pairs trading is an investment strategy that exploits the short-term price difference (spread) between two co-moving stocks. Recently, pairs trading methods based on deep reinforcement learning have yielded promising results. These methods can be classified into two approaches: (1) indirectly determining trading actions from trading and stop-loss boundaries, and (2) directly determining trading actions from the spread. In the former approach, the trading boundary is completely dependent on the stop-loss boundary, which is not optimal. In the latter approach, there is a risk of significant loss because no stop-loss boundary exists. To overcome the disadvantages of both approaches, we propose a hybrid deep reinforcement learning method for pairs trading, called HDRL-Trader, which employs two independent reinforcement learning networks: one determines trading actions and the other determines stop-loss boundaries. Furthermore, HDRL-Trader incorporates novel techniques, such as dimensionality reduction, clustering, regression, behavior cloning, prioritized experience replay, and dynamic delay, into its architecture. The performance of HDRL-Trader is compared with state-of-the-art reinforcement learning methods for pairs trading (P-DDQN, PTDQN, and P-Trader). Experimental results for twenty stock pairs in the Standard & Poor's 500 index show that HDRL-Trader achieves an average return rate of 82.4%, which is 25.7 percentage points higher than that of the second-best method, and yields a significantly positive return rate for every stock pair.
Pages: 23
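
To make the two approaches in the abstract concrete, the sketch below shows a z-scored spread signal and the fixed-boundary trading rule that approach (1) builds on. This is a minimal illustration under stated assumptions, not the authors' implementation: the OLS hedge ratio, the 20-day window, the thresholds, and all function names are hypothetical.

```python
# Minimal, hypothetical sketch of a pairs-trading spread signal and a
# boundary-based rule. Windows, thresholds, and names are illustrative
# assumptions; they are NOT taken from the HDRL-Trader paper.
import numpy as np
import pandas as pd

def zscored_spread(a: pd.Series, b: pd.Series, window: int = 20) -> pd.Series:
    """Spread = a - beta*b with an OLS hedge ratio, z-scored on a rolling window."""
    beta = np.polyfit(b.to_numpy(), a.to_numpy(), deg=1)[0]  # hedge ratio (slope)
    spread = a - beta * b
    return (spread - spread.rolling(window).mean()) / spread.rolling(window).std()

def boundary_action(z: float, open_bd: float = 2.0, stop_bd: float = 3.0) -> str:
    """Approach (1): act only when z crosses the trading/stop-loss boundaries."""
    if abs(z) >= stop_bd:
        return "stop_loss"      # unwind the position to cap losses
    if z >= open_bd:
        return "short_spread"   # short a, long b; bet on mean reversion
    if z <= -open_bd:
        return "long_spread"    # long a, short b
    return "hold"

# Hypothetical usage with simulated co-moving prices (illustration only):
rng = np.random.default_rng(0)
b = pd.Series(100 + rng.normal(0, 1, 500).cumsum())
a = 1.5 * b + pd.Series(rng.normal(0, 2, 500))
z = zscored_spread(a, b)
print(boundary_action(z.iloc[-1]))
```

Per the abstract, HDRL-Trader does not hand-set boundaries like open_bd and stop_bd above: one reinforcement learning network determines the trading action directly while a second, independent network determines the stop-loss boundary, decoupling the two decisions that approach (1) ties together and approach (2) omits.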