Supervised actor-critic reinforcement learning with action feedback for algorithmic trading

Cited by: 5
Authors
Sun, Qizhou [1 ]
Si, Yain-Whar [1 ]
Institution
[1] Univ Macau, Dept Comp & Informat Sci, Ave da Univ, Taipa, Macau, Peoples R China
Keywords
Finance; Reinforcement learning; Supervised learning; Algorithmic trading; ENERGY;
DOI
10.1007/s10489-022-04322-5
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning is one of the promising approaches for algorithmic trading in financial markets. However, in certain situations, buy or sell orders issued by an algorithmic trading program may not be filled entirely. Considering such realistic scenarios in financial markets, in this paper we propose a novel framework named Supervised Actor-Critic Reinforcement Learning with Action Feedback (SACRL-AF) to address this problem. The action feedback mechanism of SACRL-AF notifies the actor of the dealt positions and corrects the corresponding transitions in the replay buffer. Meanwhile, the dealt positions are used as labels for supervised learning. Recent studies have shown that Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3) are more stable than, and superior to, other actor-critic algorithms. Against this background, based on the proposed SACRL-AF framework, two reinforcement learning algorithms, henceforth referred to as Supervised Deep Deterministic Policy Gradient with Action Feedback (SDDPG-AF) and Supervised Twin Delayed Deep Deterministic Policy Gradient with Action Feedback (STD3-AF), are proposed in this paper. Experimental results show that SDDPG-AF and STD3-AF achieve state-of-the-art performance in profitability.
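The abstract above outlines the core mechanism; the following minimal Python sketch (not the authors' code) illustrates one way the action-feedback idea could look in practice: the broker fills only part of the requested order, the dealt position replaces the intended action in the stored replay-buffer transition, and the same dealt position serves as a supervised label added to a DDPG-style actor loss. Network sizes, the toy fill model `execute_order`, and the weighting `lambda_sl` are illustrative assumptions, and the critic and target-network updates of DDPG/TD3 are omitted for brevity.

```python
# Illustrative sketch of action feedback + supervised actor loss (assumptions,
# not the paper's implementation): partial order fills are fed back into the
# replay buffer and used as supervised labels for the actor.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 8, 1          # e.g. price features -> target position

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

replay_buffer = deque(maxlen=10_000)
lambda_sl = 0.5                        # weight of the supervised term (assumption)


def execute_order(state, intended_action):
    """Toy broker: only a random fraction of the order is actually filled."""
    fill_ratio = random.uniform(0.3, 1.0)          # partial execution
    dealt_action = intended_action * fill_ratio    # position actually obtained
    reward = float(torch.randn(()))                # placeholder P&L signal
    next_state = torch.randn(STATE_DIM)
    return dealt_action, reward, next_state


def step_and_store(state):
    """Action feedback: store the dealt action, not the intended one."""
    intended = actor(state).detach()
    dealt, reward, next_state = execute_order(state, intended)
    replay_buffer.append((state, dealt, reward, next_state))


def update_actor(batch_size=32):
    """DDPG-style actor update plus a supervised term on dealt positions."""
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)
    states = torch.stack([t[0] for t in batch])
    dealt = torch.stack([t[1] for t in batch])      # labels from the market

    proposed = actor(states)
    # Deterministic policy-gradient term: maximise Q(s, pi(s)).
    pg_loss = -critic(torch.cat([states, proposed], dim=-1)).mean()
    # Supervised term: pull the actor toward the positions that were dealt.
    sl_loss = F.mse_loss(proposed, dealt)

    loss = pg_loss + lambda_sl * sl_loss
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()


if __name__ == "__main__":
    for _ in range(100):
        step_and_store(torch.randn(STATE_DIM))
        update_actor()
```

In a full SDDPG-AF/STD3-AF setup the critic would also be trained from the corrected transitions, so both the value estimates and the policy are conditioned on what was actually executed rather than on the intended orders.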
Pages: 16875-16892
Page count: 18