Smart Robotic Strategies and Advice for Stock Trading Using Deep Transformer Reinforcement Learning

Cited by: 4
Authors
Malibari, Nadeem [1 ]
Katib, Iyad [1 ]
Mehmood, Rashid [2 ]
Affiliations
[1] King Abdulaziz Univ, Fac Comp & Informat Technol, Dept Comp Sci, Jeddah 21589, Saudi Arabia
[2] King Abdulaziz Univ, High Performance Comp Ctr, Jeddah 21589, Saudi Arabia
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 24
Keywords
stock trading; transformer; deep reinforcement learning; machine learning; Tadawul; stocks; robotic advice; robotic strategies; TIME-SERIES; PERFORMANCE;
DOI
10.3390/app122412526
Chinese Library Classification
O6 [Chemistry]
Subject Classification Code
0703
Abstract
The many success stories of reinforcement learning (RL) and deep learning (DL) techniques have raised interest in their use for detecting patterns and generating constant profits from financial markets. In this paper, we combine deep reinforcement learning (DRL) with a transformer network to develop a decision transformer architecture for online trading. We use data from the Saudi Stock Exchange (Tadawul), one of the largest liquid stock exchanges globally. Specifically, we use the indices of four firms: Saudi Telecom Company, Al-Rajhi Banking and Investment, Saudi Electricity Company, and Saudi Basic Industries Corporation. To ensure the robustness and risk management of the proposed model, we consider seven reward functions: the Sortino ratio, cumulative returns, annual volatility, omega, the Calmar ratio, max drawdown, and a normal reward without any risk adjustment. Our proposed DRL-based model delivered the highest average increase in net worth for Saudi Telecom Company, Saudi Electricity Company, Saudi Basic Industries Corporation, and Al-Rajhi Banking and Investment, at 21.54%, 18.54%, 17%, and 19.36%, respectively. The Sortino ratio, cumulative returns, and annual volatility were found to be the best-performing reward functions. This work makes significant contributions to trading with long-term investment and profit goals.
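Several of the reward functions named in the abstract are standard risk-adjusted performance measures. As a minimal illustrative sketch, and not the authors' implementation, the following Python snippet shows how the Sortino ratio, annual volatility, maximum drawdown, and Calmar ratio could be computed from a window of daily returns; the 252-trading-day annualization factor, the zero target return, and the small epsilon guard are assumptions made here for illustration.

# Illustrative sketch of risk-adjusted reward functions (assumptions:
# 252-day annualization, zero target return, epsilon to avoid divide-by-zero).
import numpy as np

TRADING_DAYS = 252
EPS = 1e-9

def sortino_ratio(returns, target=0.0):
    """Mean excess return divided by downside deviation (returns below target)."""
    excess = returns - target
    downside = np.minimum(excess, 0.0)
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return np.mean(excess) / (downside_dev + EPS)

def annual_volatility(returns):
    """Standard deviation of daily returns, annualized."""
    return np.std(returns) * np.sqrt(TRADING_DAYS)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative equity curve (negative)."""
    equity = np.cumprod(1.0 + returns)
    running_peak = np.maximum.accumulate(equity)
    return np.min(equity / running_peak - 1.0)

def calmar_ratio(returns):
    """Annualized mean return divided by the magnitude of the max drawdown."""
    annual_return = np.mean(returns) * TRADING_DAYS
    return annual_return / (abs(max_drawdown(returns)) + EPS)

if __name__ == "__main__":
    # Synthetic daily returns, only to exercise the functions above.
    rng = np.random.default_rng(0)
    daily_returns = rng.normal(5e-4, 1e-2, size=TRADING_DAYS)
    print("Sortino:", sortino_ratio(daily_returns))
    print("Annual volatility:", annual_volatility(daily_returns))
    print("Max drawdown:", max_drawdown(daily_returns))
    print("Calmar:", calmar_ratio(daily_returns))

In a DRL trading setup of the kind described, any one of these statistics, evaluated over the agent's recent return history, could serve as the per-step or per-episode reward signal in place of raw profit.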
Pages: 33