Algorithmic trading using combinational rule vector and deep reinforcement learning

Cited by: 5
Authors
Huang, Zhen [1]
Li, Ning [1]
Mei, Wenliang [2]
Gong, Wenyong [1]
Affiliations
[1] Jinan Univ, Dept Math, Guangzhou, Peoples R China
[2] CHN Energy Investment Grp Co LTD, Beijing, Peoples R China
Keywords
Algorithmic trading; Combinational rule vectors; Deep reinforcement learning
DOI
10.1016/j.asoc.2023.110802
CLC number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Algorithmic trading rules are widely used in financial markets as technical analysis tools for security trading. However, traditional trading rules alone are not sufficient for making trading decisions. In this paper, we propose a new algorithmic trading method called CR-DQN, which incorporates deep Q-learning with two popular trading rules: moving average (MA) and trading range break-out (TRB). The input of deep Q-learning is a combinational rule vector, whose components are linear combinations of 140 rules produced by MA and TRB with different parameters. Because of the non-stationary characteristics of financial data, we devise a reward-driven combination weight updating scheme to generate the combinational rule vectors, which can capture intrinsic features of the data. Since sparse rewards arise in CR-DQN, we design a piecewise reward function, which shows great potential in the experiments. Taking combinational rule vectors as input, an LSTM-based deep Q-learning network is used to learn an optimal algorithmic trading strategy. We apply our model to both Chinese and non-Chinese stock markets, and CR-DQN exhibits the best performance on a variety of evaluation criteria compared with many other approaches, demonstrating the effectiveness of the proposed method. (c) 2023 Elsevier B.V. All rights reserved.
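The abstract describes a pipeline of parameterized MA and TRB rule signals, a reward-driven combination-weight update, and an LSTM-based Q-network. Below is a minimal, illustrative sketch of that pipeline, not the authors' implementation: the rule parameters, the weight-update form, and all names (ma_signal, trb_signal, update_weights, LSTMQNet) are assumptions made for illustration, and the paper's 140 rules and piecewise reward are not reproduced here.

# Minimal sketch (assumptions throughout, not the authors' code): parameterized MA and
# TRB rule signals, a reward-driven combination-weight update, and an LSTM-based
# Q-network over the resulting rule vector. Windows, learning rate, and the exact
# combination/update forms are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn

def ma_signal(prices, short, long):
    # Moving-average rule: +1 (buy) when the short MA is above the long MA, else -1.
    if len(prices) < long:
        return 0
    return 1 if prices[-short:].mean() > prices[-long:].mean() else -1

def trb_signal(prices, window):
    # Trading-range break-out rule: +1 on a new high over `window` days, -1 on a new low.
    if len(prices) < window + 1:
        return 0
    recent, last = prices[-window - 1:-1], prices[-1]
    return 1 if last > recent.max() else (-1 if last < recent.min() else 0)

def rule_signals(prices):
    # Stack many parameterized MA and TRB rules into one signal vector
    # (the paper uses 140 rules; only a handful of parameter pairs are shown here).
    ma_params = [(s, l) for s in (5, 10, 20) for l in (50, 100, 200)]
    trb_params = [10, 20, 50, 100]
    sigs = [ma_signal(prices, s, l) for s, l in ma_params]
    sigs += [trb_signal(prices, w) for w in trb_params]
    return np.array(sigs, dtype=np.float32)

def update_weights(weights, signals, realized_return, lr=0.05):
    # Assumed reward-driven update: up-weight rules whose signal agreed with the
    # sign of the realized return, then renormalize.
    weights = np.clip(weights + lr * realized_return * signals, 1e-6, None)
    return weights / weights.sum()

class LSTMQNet(nn.Module):
    # LSTM-based Q-network: a sequence of weighted rule vectors -> Q-values
    # for three actions (buy, hold, sell).
    def __init__(self, in_dim, hidden=64, n_actions=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                 # x: (batch, seq_len, in_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # Q-values from the last time step

if __name__ == "__main__":
    prices = np.cumsum(np.random.randn(300)) + 100.0           # synthetic price path
    sigs = rule_signals(prices)
    w = np.full(len(sigs), 1.0 / len(sigs))                    # uniform initial weights
    w = update_weights(w, sigs, realized_return=0.01)
    state = torch.tensor(w * sigs, dtype=torch.float32).view(1, 1, -1)
    qnet = LSTMQNet(in_dim=len(sigs))
    print(qnet(state))                                         # Q-values: buy / hold / sell

In the paper, each component of the combinational rule vector is a linear combination of the 140 rule signals and the reward-driven scheme updates those combination weights over time; the sketch above collapses that to a single elementwise weighting for brevity.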
Pages: 12
Related papers
50 records in total
  • [41] A Stock Trading Strategy Based on Deep Reinforcement Learning
    Khemlichi, Firdaous
    Chougrad, Hiba
    Khamlichi, Youness Idrissi
    El Boushaki, Abdessamad
    Ben Ali, Safae El Haj
    ADVANCED INTELLIGENT SYSTEMS FOR SUSTAINABLE DEVELOPMENT (AI2SD'2020), VOL 2, 2022, 1418 : 920 - 928
  • [42] Improving exploration in deep reinforcement learning for stock trading
    Zemzem, Wiem
    Tagina, Moncef
    INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2023, 72 (04) : 288 - 295
  • [43] Deep Reinforcement Learning for Quantitative Trading: Challenges and Opportunities
    An, Bo
    Sun, Shuo
    Wang, Rundong
    IEEE INTELLIGENT SYSTEMS, 2022, 37 (02) : 23 - 26
  • [44] Deep Reinforcement Learning for Trading-A Critical Survey
    Millea, Adrian
    DATA, 2021, 6 (11)
  • [45] Outperforming algorithmic trading reinforcement learning systems: A supervised approach to the cryptocurrency market
    Felizardo, Leonardo Kanashiro
    Lima Paiva, Francisco Caio
    Graves, Catharine de Vita
    Matsumoto, Elia Yathie
    Reali Costa, Anna Helena
    Del-Moral-Hernandez, Emilio
    Brandimarte, Paolo
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 202
  • [47] Supervised actor-critic reinforcement learning with action feedback for algorithmic trading
    Sun, Qizhou
    Si, Yain-Whar
    APPLIED INTELLIGENCE, 2023, 53 (13) : 16875 - 16892
  • [48] Time-driven feature-aware jointly deep reinforcement learning for financial signal representation and algorithmic trading
    Lei, Kai
    Zhang, Bing
    Li, Yu
    Yang, Min
    Shen, Ying
    EXPERT SYSTEMS WITH APPLICATIONS, 2020, 140
  • [49] MOT: A Mixture of Actors Reinforcement Learning Method by Optimal Transport for Algorithmic Trading
    Cheng, Xi
    Zhang, Jinghao
    Zeng, Yunan
    Xue, Wenfang
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT IV, PAKDD 2024, 2024, 14648 : 30 - 42
  • [50] Algorithmic trading using machine learning and neural network
    Agarwal, D.
    Sheth, R.
    Shekokar, N.
    LECTURE NOTES ON DATA ENGINEERING AND COMMUNICATIONS TECHNOLOGIES, 2021, 66 : 407 - 421