Smart Trading: A Novel Reinforcement Learning Framework for Quantitative Trading in Noisy Markets

Times Cited: 0
Authors
Shen, Zhenyi [1 ]
Mao, Xiahong [2 ]
Wang, Chao [3 ]
Zhao, Dan [1 ]
Zhao, Shuangxue [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[2] Bank Hangzhou Co Ltd, Hangzhou, Peoples R China
[3] Swinburne Univ Technol, Hawthorn, Vic 3122, Australia
Keywords
Reinforcement learning; Discrete features; Quantitative trading
DOI
10.1007/978-981-97-5663-6_14
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The complexity of trading markets is heightened by the fact that capital asset prices can be significantly influenced by traders' emotions. While the FinRL library provides a state-of-the-art reinforcement learning framework for training trading agents, it lacks mechanisms to counteract market noise and accelerate the agent's learning in such a complex environment. This paper proposes a novel reinforcement learning framework for quantitative trading that enables the agent to operate more effectively in noisy markets. Discrete features are used as inputs instead of continuous ones, reducing the complexity of the input features: discretization suppresses market noise and makes the feature space more manageable, thereby simplifying the agent's learning process. A theorem is introduced to guide the choice of discrete features based on the available sample size. Within the trading environment, an adaptive scalar removes the influence of historical trends so that agents do not blindly follow those trends instead of the input signals. Additionally, a low-pass filter is applied before computing immediate rewards to ease the model's training. Experiments on different datasets demonstrate that agents trained with the proposed framework can earn excess returns across markets.
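Two of the ideas summarized in the abstract, feature discretization and low-pass filtering of rewards, could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the quantile-based binning, and the exponential-moving-average filter are all assumptions standing in for whatever the authors used.

```python
# Hypothetical sketch of two techniques described in the abstract
# (illustrative only; not the paper's implementation):
#  1) quantile-based discretization of a continuous market feature, and
#  2) a first-order low-pass (exponential moving average) filter applied
#     to raw returns before they are used as immediate rewards.

def discretize(values, n_bins=5):
    """Map continuous values to integer bin labels via empirical quantiles."""
    sorted_vals = sorted(values)
    # Quantile cut points splitting the sample into n_bins equal-count bins.
    cuts = [sorted_vals[int(len(sorted_vals) * k / n_bins)]
            for k in range(1, n_bins)]
    # Each value's label is the number of cut points it meets or exceeds.
    return [sum(v >= c for c in cuts) for v in values]

def low_pass(raw_rewards, alpha=0.3):
    """Exponential moving average; smaller alpha means stronger smoothing."""
    filtered, state = [], 0.0
    for r in raw_rewards:
        state = alpha * r + (1 - alpha) * state
        filtered.append(state)
    return filtered
```

With `n_bins` chosen according to the sample size (as the paper's theorem reportedly guides), the agent observes coarse bin labels rather than noisy continuous values, and the smoothed reward signal varies less abruptly between steps.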
Pages: 158-170 (13 pages)