Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis

Times Cited: 0
Authors
Tabaro, Leon [1 ]
Kinani, Jean Marie Vianney [2 ]
Rosales-Silva, Alberto Jorge [3 ]
Salgado-Ramirez, Julio Cesar [4 ]
Mujica-Vargas, Dante [5 ]
Escamilla-Ambrosio, Ponciano Jorge [6 ]
Ramos-Diaz, Eduardo [7 ]
Affiliations
[1] Loughborough Univ, Dept Comp Sci, Epinal Way, Loughborough LE11 3TU, England
[2] Inst Politecn Nacl UPIIH, Dept Mecatron, Pachuca 07738, Mexico
[3] Inst Politecn Nacl, Secc Estudios Posgrad & Invest, ESIME Zacatenco, Mexico City 07738, DF, Mexico
[4] Univ Politecn Pachuca, Ingn Biomed, Zempoala 43830, Mexico
[5] Tecnol Nacl Mex CENIDET, Dept Comp Sci, Interior Internado Palmira S-N, Palmira 62490, Cuernavaca, Mexico
[6] Inst Politecn Nacl, Ctr Invest Comp, Mexico City 07700, DF, Mexico
[7] Univ Autonoma Ciudad Mexico, Ingn Sistemas Elect & Telecomunicac, Mexico City 09790, DF, Mexico
Keywords
deep reinforcement learning; automated trading systems; Q-learning; double deep Q-networks; sentiment analysis; stock market prediction; algorithmic trading
DOI
10.3390/info15080473
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. Whereas algorithmic trading typically automates a predefined trading strategy, we instead train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. We extended this approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in cumulative reward over the testing period and an improvement in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent's trading strategy consistently outperformed the buy-and-hold benchmark. We further investigated the impact of the length of the window of past market data that the agent considers when selecting a trading action. These results validate DRL's ability to find effective trading policies and its value for studying agent behaviour in markets, and this work provides future researchers with a foundation for developing more advanced and adaptive DRL-based trading systems.
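As a minimal illustration of the mechanism the abstract describes, the sketch below shows a Double DQN target update in which the MDP state is a window of past market features with a sentiment score appended. The window length, feature count, action set, and network sizes are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' code): Double DQN target with a
# sentiment-augmented state. All sizes below are assumed for illustration.
import torch
import torch.nn as nn

WINDOW = 30      # length of the past-market-data window (assumed)
N_FEATURES = 5   # e.g. OHLCV features per time step (assumed)
N_ACTIONS = 3    # e.g. buy / hold / sell

def make_state(market_window, sentiment_score):
    """Flatten the (WINDOW, N_FEATURES) market window and append sentiment."""
    return torch.cat([market_window.flatten(), sentiment_score.view(1)])

q_online = nn.Sequential(nn.Linear(WINDOW * N_FEATURES + 1, 64),
                         nn.ReLU(), nn.Linear(64, N_ACTIONS))
q_target = nn.Sequential(nn.Linear(WINDOW * N_FEATURES + 1, 64),
                         nn.ReLU(), nn.Linear(64, N_ACTIONS))
q_target.load_state_dict(q_online.state_dict())  # periodically synced copy

def ddqn_target(reward, next_state, done, gamma=0.99):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best_action = q_online(next_state).argmax()   # action selection
        q_next = q_target(next_state)[best_action]    # action evaluation
    return reward + gamma * q_next * (1.0 - done)

# Example usage with random data (illustrative only):
mw = torch.randn(WINDOW, N_FEATURES)
s = make_state(mw, torch.tensor(0.42))  # sentiment score in [-1, 1], assumed
y = ddqn_target(reward=0.01, next_state=s, done=0.0)
```

The decoupling in `ddqn_target`, where the online network selects the next action while the target network evaluates it, is what distinguishes Double DQN from standard deep Q-learning and mitigates the overestimation of Q-values.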
Pages: 24