Algorithmic Trading Using Double Deep Q-Networks and Sentiment Analysis

Cited by: 0
Authors
Tabaro, Leon [1 ]
Kinani, Jean Marie Vianney [2 ]
Rosales-Silva, Alberto Jorge [3 ]
Salgado-Ramirez, Julio Cesar [4 ]
Mujica-Vargas, Dante [5 ]
Escamilla-Ambrosio, Ponciano Jorge [6 ]
Ramos-Diaz, Eduardo [7 ]
Affiliations
[1] Loughborough Univ, Dept Comp Sci, Epinal Way, Loughborough LE11 3TU, England
[2] Inst Politecn Nacl UPIIH, Dept Mecatron, Pachuca 07738, Mexico
[3] Inst Politecn Nacl, Secc Estudios Posgrad & Invest, ESIME Zacatenco, Mexico City 07738, DF, Mexico
[4] Univ Politecn Pachuca, Ingn Biomed, Zempoala 43830, Mexico
[5] Tecnol Nacl Mex CENIDET, Dept Comp Sci, Interior Internado Palmira S-N, Palmira 62490, Cuernavaca, Mexico
[6] Inst Politecn Nacl, Ctr Invest Comp, Mexico City 07700, DF, Mexico
[7] Univ Autonoma Ciudad Mexico, Ingn Sistemas Elect & Telecomunicac, Mexico City 09790, DF, Mexico
Keywords
deep reinforcement learning; automated trading systems; Q-learning; double deep Q-networks; sentiment analysis; stock market prediction; algorithmic trading
DOI
10.3390/info15080473
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this work, we explore the application of deep reinforcement learning (DRL) to algorithmic trading. Whereas conventional algorithmic trading uses computer algorithms to automate a predefined trading strategy, we train a Double Deep Q-Network (DDQN) agent to learn its own optimal trading policy, with the goal of maximising returns whilst managing risk. We extend this approach by augmenting the Markov Decision Process (MDP) states with sentiment analysis of financial statements, through which the agent achieved up to a 70% increase in cumulative reward over the testing period and an improvement in the Calmar ratio from 0.9 to 1.3. The experimental results also showed that the DDQN agent's trading strategy consistently outperformed the buy-and-hold benchmark. We further investigated how the length of the window of past market data that the agent considers when selecting a trading action affects performance. These results validate DRL's ability to find effective trading solutions and its value for studying agent behaviour in markets, and this work provides future researchers with a foundation for developing more advanced and adaptive DRL-based trading systems.
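The abstract's two quantitative ingredients can be made concrete. The following is a minimal Python sketch, not the authors' implementation: it shows (a) the Double DQN target, in which the online network selects the greedy next action and the target network evaluates it, computed over a state that concatenates a look-back window of market features with a per-step sentiment score, and (b) the Calmar ratio (annualised return divided by maximum drawdown) of the kind used to report the 0.9-to-1.3 improvement. The window length, feature count, three-action set, and network shape are all illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn

    WINDOW = 30          # assumed look-back window of past market data
    SENTIMENT_DIM = 1    # assumed: one sentiment score per step (from financial statements)
    N_ACTIONS = 3        # assumed action set: buy / hold / sell

    class QNet(nn.Module):
        # Q-network over a flattened (window x features) state vector.
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    def ddqn_target(online, target, r, s_next, done, gamma=0.99):
        # Double DQN: the online net SELECTS the greedy next action and the
        # target net EVALUATES it; decoupling selection from evaluation
        # reduces the overestimation bias of vanilla Q-learning.
        with torch.no_grad():
            a_star = online(s_next).argmax(dim=1, keepdim=True)
            q_next = target(s_next).gather(1, a_star).squeeze(1)
            return r + gamma * (1.0 - done) * q_next

    def calmar_ratio(equity, periods_per_year=252):
        # Calmar ratio = annualised return / maximum drawdown.
        equity = np.asarray(equity, dtype=float)
        years = len(equity) / periods_per_year
        annual_return = (equity[-1] / equity[0]) ** (1.0 / years) - 1.0
        running_max = np.maximum.accumulate(equity)
        max_drawdown = np.max((running_max - equity) / running_max)
        return annual_return / max_drawdown if max_drawdown > 0 else float('inf')

    # Example wiring: a 30-step window of 5 market features plus the sentiment score.
    state_dim = WINDOW * (5 + SENTIMENT_DIM)
    online, target = QNet(state_dim, N_ACTIONS), QNet(state_dim, N_ACTIONS)
    target.load_state_dict(online.state_dict())  # target net starts as a copy

Enlarging WINDOW gives the agent richer context at the cost of a larger state space, which is the trade-off the paper's window-length experiments probe.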
Pages: 24