Hybrid Deep Reinforcement Learning for Pairs Trading

Cited by: 13
Authors
Kim, Sang-Ho [1]
Park, Deog-Yeong [1]
Lee, Ki-Hoon [1]
Affiliations
[1] Kwangwoon Univ, Sch Comp & Informat Engn, 20 Kwangwoon Ro, Seoul 01897, South Korea
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 3
Funding
National Research Foundation of Singapore
Keywords
algorithmic trading; pairs trading; deep learning; reinforcement learning; TIME-SERIES; REPRESENTATION; COINTEGRATION;
DOI
10.3390/app12030944
CLC Number
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Pairs trading is an investment strategy that exploits the short-term price difference (spread) between two co-moving stocks. Recently, pairs trading methods based on deep reinforcement learning have yielded promising results. These methods can be classified into two approaches: (1) indirectly determining trading actions based on trading and stop-loss boundaries, and (2) directly determining trading actions based on the spread. In the former approach, the trading boundary is completely dependent on the stop-loss boundary, which is certainly not optimal. In the latter approach, there is a risk of significant loss because of the absence of a stop-loss boundary. To overcome the disadvantages of both approaches, we propose a hybrid deep reinforcement learning method for pairs trading, called HDRL-Trader, which employs two independent reinforcement learning networks: one for determining trading actions and the other for determining stop-loss boundaries. Furthermore, HDRL-Trader incorporates novel techniques, such as dimensionality reduction, clustering, regression, behavior cloning, prioritized experience replay, and dynamic delay, into its architecture. The performance of HDRL-Trader is compared with state-of-the-art reinforcement learning methods for pairs trading (P-DDQN, PTDQN, and P-Trader). The experimental results for twenty stock pairs in the Standard & Poor's 500 index show that HDRL-Trader achieves an average return rate of 82.4%, which is 25.7 percentage points higher than that of the second-best method, and that it yields significantly positive return rates for all stock pairs.
Pages: 23
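The following is a minimal, self-contained sketch of the hybrid idea the abstract describes, not the paper's implementation: the spread is computed from an OLS hedge ratio on a co-moving pair, and two stand-in rules (the act_threshold and stop_loss parameters, both hypothetical) play the roles that HDRL-Trader assigns to its two independent reinforcement learning networks. The state features, network architectures, and training techniques (behavior cloning, prioritized experience replay, dynamic delay) are not reproduced here.

```python
import numpy as np

def hedge_ratio(log_a: np.ndarray, log_b: np.ndarray) -> float:
    """OLS slope beta so that spread = log_a - beta * log_b is (ideally) mean-reverting."""
    beta, _intercept = np.polyfit(log_b, log_a, 1)
    return float(beta)

def zscore(series: np.ndarray) -> np.ndarray:
    """Standardize the spread so thresholds are in standard-deviation units."""
    return (series - series.mean()) / series.std()

def step(z: float, position: int, act_threshold: float, stop_loss: float) -> int:
    """One decision step; returns the new position (+1 long spread, -1 short, 0 flat).
    In HDRL-Trader, two independent RL networks choose the trading action and the
    stop-loss boundary; here both are fixed stand-in rules for illustration."""
    if position != 0 and abs(z) > stop_loss:   # stop-loss role: cap losses if the
        return 0                               # spread diverges instead of reverting
    if position == 0 and z > act_threshold:    # spread unusually wide: short it
        return -1
    if position == 0 and z < -act_threshold:   # spread unusually narrow: long it
        return 1
    if position != 0 and abs(z) < 0.1:         # spread has reverted: take profit
        return 0
    return position                            # otherwise hold

# Synthetic co-moving pair: leg B is a random walk, leg A tracks it with noise.
rng = np.random.default_rng(0)
log_b = np.cumsum(rng.normal(0.0, 0.01, 500))
log_a = 1.2 * log_b + rng.normal(0.0, 0.02, 500)

z = zscore(log_a - hedge_ratio(log_a, log_b) * log_b)

position = 0
for zt in z:
    position = step(float(zt), position, act_threshold=2.0, stop_loss=3.0)
print("final position:", position)
```

The key design point the sketch mirrors is the separation of concerns: the entry/exit decision and the stop-loss boundary are decided independently, so neither is hard-wired as a function of the other, which is the dependency the abstract identifies as the weakness of the boundary-based approach.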