Algorithmic trading using combinational rule vector and deep reinforcement learning

Cited by: 5
Authors
Huang, Zhen [1 ]
Li, Ning [1 ]
Mei, Wenliang [2 ]
Gong, Wenyong [1 ]
Affiliations
[1] Jinan Univ, Dept Math, Guangzhou, Peoples R China
[2] CHN Energy Investment Grp Co LTD, Beijing, Peoples R China
Keywords
Algorithmic trading; Combinational rule vectors; Deep reinforcement learning
DOI
10.1016/j.asoc.2023.110802
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Algorithmic trading rules are widely used in financial markets as technical analysis tools for security trading. However, traditional trading rules alone are not sufficient to make a trading decision. In this paper, we propose a new algorithmic trading method called CR-DQN, which incorporates deep Q-learning with two popular trading rules: moving average (MA) and trading range break-out (TRB). The input of deep Q-learning is combinational rule vectors, whose components are linear combinations of 140 rules produced by MA and TRB with different parameters. Due to the non-stationary characteristics of financial data, we devise a reward-driven combination weight updating scheme to generate combinational rule vectors, which can capture intrinsic features of financial data. Since sparse rewards exist in CR-DQN, we design a piecewise reward function which shows great potential in the experiments. Taking combinational rule vectors as input, an LSTM-based deep Q-learning network is used to learn an optimal algorithmic trading strategy. We apply our model to both Chinese and non-Chinese stock markets, and CR-DQN exhibits the best performance on a variety of evaluation criteria compared to many other approaches, demonstrating the effectiveness of the proposed method. (c) 2023 Elsevier B.V. All rights reserved.
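The abstract describes combining many MA and TRB rule signals through reward-driven weights. The following is a minimal, purely illustrative Python sketch of that idea; the concrete signal definitions, the exponential weight-update rule, and the parameter `eta` are assumptions for illustration and are not taken from the paper.

```python
import math

def ma_signal(prices, short, long):
    """Moving-average crossover rule: +1 if the short MA is above the long MA, else -1."""
    if len(prices) < long:
        return 0
    s = sum(prices[-short:]) / short
    l = sum(prices[-long:]) / long
    return 1 if s > l else -1

def trb_signal(prices, window):
    """Trading range break-out rule: +1 on a new high, -1 on a new low, else 0."""
    if len(prices) <= window:
        return 0
    recent = prices[-window - 1:-1]
    if prices[-1] > max(recent):
        return 1
    if prices[-1] < min(recent):
        return -1
    return 0

def combine(signals, weights):
    """One component of a combinational rule vector: a weighted average of rule signals."""
    return sum(w * s for w, s in zip(weights, signals)) / sum(weights)

def update_weights(weights, rewards, eta=0.1):
    """Reward-driven update (assumed form): rules that earned higher reward
    get proportionally more weight, via a normalized exponential update."""
    new = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    z = sum(new)
    return [w / z for w in new]
```

For example, `combine([ma_signal(p, 5, 20), trb_signal(p, 20)], w)` would yield one component of the rule vector fed to the Q-network, with `w` re-estimated each step by `update_weights` from each rule's realized reward.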
Pages: 12
Related papers
50 items total
  • [21] Portfolio dynamic trading strategies using deep reinforcement learning
    Day, Min-Yuh
    Yang, Ching-Ying
    Ni, Yensen
    SOFT COMPUTING, 2023, 28 (15-16) : 8715 - 8730
  • [22] DEEP REINFORCEMENT LEARNING FOR FINANCIAL TRADING USING PRICE TRAILING
    Zarkias, Konstantinos Saitas
    Passalis, Nikolaos
    Tsantekidis, Avraam
    Tefas, Anastasios
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3067 - 3071
  • [23] A System for Trading Rule Search in Algorithmic Trading
    Tabata, Tomoaki
    Koita, Takahiro
    IMCIC'11: THE 2ND INTERNATIONAL MULTI-CONFERENCE ON COMPLEXITY, INFORMATICS AND CYBERNETICS, VOL II, 2011, : 56 - 57
  • [24] Deep Reinforcement Learning Robots for Algorithmic Trading: Considering Stock Market Conditions and US Interest Rates
    Park, Ji-Heon
    Kim, Jae-Hwan
    Huh, Jun-Ho
    IEEE ACCESS, 2024, 12 : 20705 - 20725
  • [25] Intelligent Demand Response Resource Trading Using Deep Reinforcement Learning
    Zhang, Yufan
    Ai, Qian
    Li, Zhaoyu
    CSEE JOURNAL OF POWER AND ENERGY SYSTEMS, 2024, 10 (06): : 2621 - 2630
  • [26] Empirical Analysis of Automated Stock Trading Using Deep Reinforcement Learning
    Kong, Minseok
    So, Jungmin
    APPLIED SCIENCES-BASEL, 2023, 13 (01):
  • [27] Hybrid Deep Reinforcement Learning for Pairs Trading
    Kim, Sang-Ho
    Park, Deog-Yeong
    Lee, Ki-Hoon
    APPLIED SCIENCES-BASEL, 2022, 12 (03):
  • [28] Deep differentiable reinforcement learning and optimal trading
    Jaisson, Thibault
    QUANTITATIVE FINANCE, 2022, 22 (08) : 1429 - 1443
  • [29] Deep Reinforcement Learning to Automate Cryptocurrency Trading
    Mahayana, Dimitri
    Shan, Elbert
    Fadhl'Abbas, Muhammad
    2022 12TH INTERNATIONAL CONFERENCE ON SYSTEM ENGINEERING AND TECHNOLOGY (ICSET 2022), 2022, : 36 - 41
  • [30] A new hybrid method of recurrent reinforcement learning and BiLSTM for algorithmic trading
    Huang, Yuling
    Song, Yunlin
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (02) : 1939 - 1951