Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis

Cited by: 0
Authors
Tabaro, Leon [1 ]
Kinani, Jean Marie Vianney [2 ]
Rosales-Silva, Alberto Jorge [3 ]
Salgado-Ramirez, Julio Cesar [4 ]
Mujica-Vargas, Dante [5 ]
Escamilla-Ambrosio, Ponciano Jorge [6 ]
Ramos-Diaz, Eduardo [7 ]
Affiliations
[1] Loughborough Univ, Dept Comp Sci, Epinal Way, Loughborough LE11 3TU, England
[2] Inst Politecn Nacl UPIIH, Dept Mecatron, Pachuca 07738, Mexico
[3] Inst Politecn Nacl, Secc Estudios Posgrad & Invest, ESIME Zacatenco, Mexico City 07738, DF, Mexico
[4] Univ Politecn Pachuca, Ingn Biomed, Zempoala 43830, Mexico
[5] Tecnol Nacl Mex CENIDET, Dept Comp Sci, Interior Internado Palmira S-N, Palmira 62490, Cuernavaca, Mexico
[6] Inst Politecn Nacl, Ctr Invest Comp, Mexico City 07700, DF, Mexico
[7] Univ Autonoma Ciudad Mexico, Ingn Sistemas Elect & Telecomunicac, Mexico City 09790, DF, Mexico
Keywords
deep reinforcement learning; automated trading systems; Q-learning; double deep Q-networks; sentiment analysis; stock market prediction; algorithmic trading
DOI
10.3390/info15080473
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. While algorithmic trading focuses on using computer algorithms to automate a predefined trading strategy, here we instead train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. We extend this approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in cumulative reward over the testing period and an increase in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent's trading strategy consistently outperformed the buy-and-hold benchmark. Additionally, we investigated the impact of the length of the window of past market data that the agent considers when deciding on the best trading action to take. The results of this study validate DRL's ability to find effective trading policies and its importance for studying the behaviour of agents in markets. This work provides future researchers with a foundation for developing more advanced and adaptive DRL-based trading systems.
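The two quantitative ingredients the abstract names, the double-Q target rule and the Calmar ratio, can be sketched minimally as follows. This is an illustrative stand-in only: the tabular "networks", the state/action sizes, and the toy equity curve are assumptions for the sketch, not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the online and target Q-networks: lookup tables over
# 5 states x 3 actions (e.g. hold, buy, sell). In the paper these would
# be neural networks trained on market (and sentiment) features.
n_states, n_actions = 5, 3
q_online = rng.normal(size=(n_states, n_actions))
q_target = rng.normal(size=(n_states, n_actions))

def ddqn_target(reward, next_state, gamma=0.99, done=False):
    """Double DQN target: the online network *selects* the next action,
    the target network *evaluates* it, reducing the overestimation bias
    of vanilla deep Q-learning."""
    if done:
        return reward
    best_action = int(np.argmax(q_online[next_state]))          # selection
    return reward + gamma * q_target[next_state, best_action]   # evaluation

def calmar_ratio(equity, periods_per_year=252):
    """Calmar ratio = annualised return / maximum drawdown, the
    risk-adjusted metric reported in the abstract (0.9 -> 1.3)."""
    years = len(equity) / periods_per_year
    annualised = (equity[-1] / equity[0]) ** (1 / years) - 1
    running_max = np.maximum.accumulate(equity)
    max_drawdown = np.max((running_max - equity) / running_max)
    return annualised / max_drawdown

# Example on a synthetic equity curve (two years of daily returns).
equity = np.cumprod(1 + rng.normal(0.0005, 0.01, size=504))
calmar = calmar_ratio(equity)
```

A higher Calmar ratio means the strategy earned more annualised return per unit of worst-case peak-to-trough loss, which is why the abstract pairs it with cumulative reward when judging the sentiment-augmented agent.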
Pages: 24