Combining deep reinforcement learning with technical analysis and trend monitoring on cryptocurrency markets

Cited by: 8
Authors
Kochliaridis, Vasileios [1 ]
Kouloumpris, Eleftherios [1 ]
Vlahavas, Ioannis [1 ]
Affiliations
[1] Aristotle Univ Thessaloniki, Sch Informat, Thessaloniki 54124, Greece
Keywords
Deep reinforcement learning; Machine learning; Proximal policy optimization; Trading; Technical analysis; Risk optimization
DOI
10.1007/s00521-023-08516-x
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Cryptocurrency markets have experienced a significant increase in popularity, which has motivated many financial traders to seek high profits in cryptocurrency trading. The predominant tool that traders use to identify profitable opportunities is technical analysis. Some investors and researchers have also combined technical analysis with machine learning in order to forecast upcoming market trends. However, even with these methods, developing successful trading strategies remains an extremely challenging task. Recently, deep reinforcement learning (DRL) algorithms have demonstrated satisfactory performance on complicated problems, including the formulation of profitable trading strategies. While some DRL techniques have succeeded in increasing profit and loss (PNL) measures, they are largely risk-unaware and struggle to maximize PNL while lowering trading risk at the same time. This research proposes combining DRL approaches with rule-based safety mechanisms to both maximize PNL returns and minimize trading risk. First, a DRL agent is trained to maximize PNL returns using a novel reward function. Then, during the exploitation phase, a rule-based mechanism is deployed to prevent uncertain actions from being executed. Finally, another novel safety mechanism is proposed, which considers the actions of a more conservatively trained agent in order to identify high-risk trading periods and avoid trading during them. Our experiments on 5 popular cryptocurrencies show that the integration of these three methods achieves very promising results.
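The two safety mechanisms described in the abstract can be illustrated with a minimal sketch: a trade is executed only if the policy is sufficiently confident in its chosen action, and it is additionally vetoed whenever a more conservatively trained agent prefers to stay out of the market. All names here (`filter_action`, `CONFIDENCE_THRESHOLD`, the action encoding 0=hold, 1=long, 2=short) are illustrative assumptions, not the authors' actual interface or thresholds.

```python
CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off below which an action counts as "uncertain"
HOLD = 0  # assumed neutral action: do not trade

def filter_action(action_probs, conservative_action=None):
    """Apply two rule-based safety filters to a policy's proposed trade.

    action_probs        : policy distribution over {hold, long, short}.
    conservative_action : action suggested by a more conservatively trained
                          agent; if that agent chooses HOLD, the period is
                          treated as high-risk and trading is skipped.
    """
    probs = [float(p) for p in action_probs]
    action = probs.index(max(probs))  # greedy action during exploitation

    # Rule 1: block uncertain actions.
    if probs[action] < CONFIDENCE_THRESHOLD:
        return HOLD

    # Rule 2: defer to the conservative agent in high-risk periods.
    if conservative_action is not None and conservative_action == HOLD:
        return HOLD

    return action

# A confident long signal passes; an uncertain or vetoed one is blocked.
print(filter_action([0.1, 0.8, 0.1]))                         # 1 (long)
print(filter_action([0.34, 0.36, 0.30]))                      # 0 (hold)
print(filter_action([0.1, 0.8, 0.1], conservative_action=0))  # 0 (hold)
```

In this sketch the conservative agent acts purely as a veto, which matches the abstract's framing of identifying high-risk periods and avoiding trading rather than overriding the main agent's direction.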
Pages: 21445-21462
Page count: 18
Related papers
17 records in total
[1] Arratia A., 2021, Journal of Banking and Financial Technology, V5, P1, DOI 10.1007/s42786-021-00027-4
[2] Cui, Tianxiang; Ding, Shusheng; Jin, Huan; Zhang, Yongmin. Portfolio constructions in cryptocurrency market: A CVaR-based deep reinforcement learning approach [J]. ECONOMIC MODELLING, 2023, 119
[3] Fang, Fan; Ventre, Carmine; Basios, Michail; Kanthan, Leslie; Martinez-Rego, David; Wu, Fan; Li, Lingbo. Cryptocurrency trading: a comprehensive survey [J]. FINANCIAL INNOVATION, 2022, 8 (01)
[4] Guarino, Alfonso; Grilli, Luca; Santoro, Domenico; Messina, Francesco; Zaccagnino, Rocco. To learn or not to learn? Evaluating autonomous, adaptive, automated traders in cryptocurrencies financial bubbles [J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (23): 20715-20756
[5] Huang J., 2019, The Journal of Finance and Data Science, V5, P140, DOI 10.1016/j.jfds.2018.10.001
[6] Kochliaridis V., 2022, TRADERNET CR CRYPTOC, P304
[7] Lazaridis A, 2020, J ARTIF INTELL RES, V69, P1421
[8] Li, Jiahao; Zhang, Yong; Yang, Xingyu; Chen, Liangwei. Online portfolio management via deep reinforcement learning with high-frequency data [J]. INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03)
[9] Lin TCW, 2013, UCLA LAW REV, V60, P678
[10] Lucarelli, Giorgio; Borrotti, Matteo. A deep Q-learning portfolio management framework for the cryptocurrency market [J]. NEURAL COMPUTING & APPLICATIONS, 2020, 32 (23): 17229-17244