Algorithmic trading using combinational rule vector and deep reinforcement learning

Cited by: 5
Authors
Huang, Zhen [1 ]
Li, Ning [1 ]
Mei, Wenliang [2 ]
Gong, Wenyong [1 ]
Affiliations
[1] Jinan Univ, Dept Math, Guangzhou, Peoples R China
[2] CHN Energy Investment Grp Co LTD, Beijing, Peoples R China
Keywords
Algorithmic trading; Combinational rule vectors; Deep reinforcement learning;
DOI
10.1016/j.asoc.2023.110802
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Algorithmic trading rules are widely used in financial markets as technical analysis tools for security trading. However, traditional trading rules alone are not sufficient to make trading decisions. In this paper, we propose a new algorithmic trading method called CR-DQN, which incorporates deep Q-learning with two popular trading rules: moving average (MA) and trading range break-out (TRB). The input of deep Q-learning is a set of combinational rule vectors, whose components are linear combinations of 140 rules produced by MA and TRB with different parameters. Due to the non-stationary characteristics of financial data, we devise a reward-driven combination weight updating scheme to generate the combinational rule vectors, which can capture intrinsic features of financial data. Since sparse rewards exist in CR-DQN, we design a piecewise reward function which shows great potential in the experiments. Taking combinational rule vectors as input, an LSTM-based deep Q-learning network is used to learn an optimal algorithmic trading strategy. We apply our model to both Chinese and non-Chinese stock markets, and CR-DQN exhibits the best performance on a variety of evaluation criteria compared to many other approaches, demonstrating the effectiveness of the proposed method.
(c) 2023 Elsevier B.V. All rights reserved.
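The rule-vector construction described in the abstract can be sketched in a few lines: MA rules signal on short/long moving-average crossovers, TRB rules signal on breaks of the recent trading range, and the per-rule weights are nudged toward rules that agreed with the realised reward. This is a minimal illustration only, not the authors' implementation — the parameter grids, the additive reward-scaled weight update, and the clipping/normalisation scheme below are all assumptions for exposition; the paper uses 140 parameterised rules and its own updating scheme.

```python
import numpy as np

def ma_rule(prices, short, long):
    """Moving-average rule: +1 if the short MA is above the long MA, else -1."""
    if len(prices) < long:
        return 0
    return 1 if np.mean(prices[-short:]) > np.mean(prices[-long:]) else -1

def trb_rule(prices, window):
    """Trading range break-out: +1 on a break above the recent high,
    -1 on a break below the recent low, 0 otherwise."""
    if len(prices) < window + 1:
        return 0
    recent, p = prices[-window - 1:-1], prices[-1]
    if p > max(recent):
        return 1
    if p < min(recent):
        return -1
    return 0

def rule_signals(prices, ma_params, trb_windows):
    """Evaluate every parameterised rule, giving one signal per rule."""
    sigs = [ma_rule(prices, s, l) for s, l in ma_params]
    sigs += [trb_rule(prices, w) for w in trb_windows]
    return np.array(sigs, dtype=float)

def update_weights(weights, signals, reward, lr=0.1):
    """Reward-driven weight update (illustrative): rules whose signal sign
    agreed with the realised reward gain weight; renormalise to sum to 1."""
    weights = np.clip(weights + lr * reward * signals, 0.0, None)
    total = weights.sum()
    return weights / total if total > 0 else np.full_like(weights, 1.0 / len(weights))
```

A combinational rule value is then the weighted sum `weights @ signals`; stacking such values for several rule groups gives the vector fed to the LSTM-based Q-network.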
Pages: 12
Related papers (50 in total)
  • [1] An application of deep reinforcement learning to algorithmic trading
    Theate, Thibaut
    Ernst, Damien
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 173
  • [2] Algorithmic trading using continuous action space deep reinforcement learning
    Majidi, Naseh
    Shamsi, Mahdi
    Marvasti, Farokh
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 235
  • [3] Deep Robust Reinforcement Learning for Practical Algorithmic Trading
    Li, Yang
    Zheng, Wanshan
    Zheng, Zibin
    IEEE ACCESS, 2019, 7 : 108014 - 108022
  • [4] Using Reinforcement Learning in the Algorithmic Trading Problem
    Ponomarev, E. S.
    Oseledets, I. V.
    Cichocki, A. S.
    JOURNAL OF COMMUNICATIONS TECHNOLOGY AND ELECTRONICS, 2019, 64 (12) : 1450 - 1457
  • [5] Sentiment and Knowledge Based Algorithmic Trading with Deep Reinforcement Learning
    Nan, Abhishek
    Perumal, Anandh
    Zaiane, Osmar R.
    DATABASE AND EXPERT SYSTEMS APPLICATIONS, DEXA 2022, PT I, 2022, 13426 : 167 - 180
  • [6] Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning
    Park, Deog-Yeong
    Lee, Ki-Hoon
    IEEE ACCESS, 2021, 9 : 152310 - 152321
  • [7] Intelligent Algorithmic Trading Strategy Using Reinforcement Learning and Directional Change
    Aloud, Monira Essa
    Alkhamees, Nora
    IEEE ACCESS, 2021, 9 : 114659 - 114671
  • [8] A Mean-VaR Based Deep Reinforcement Learning Framework for Practical Algorithmic Trading
    Jin, Boyi
    IEEE ACCESS, 2023, 11 : 28920 - 28933
  • [9] A novel deep reinforcement learning framework with BiLSTM-Attention networks for algorithmic trading
    Huang, Yuling
    Wan, Xiaoxiao
    Zhang, Lin
    Lu, Xiaoping
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 240