Combining transformer based deep reinforcement learning with Black-Litterman model for portfolio optimization

Cited: 0
Authors
Ruoyu Sun [1]
Angelos Stefanidis [2]
Zhengyong Jiang [2]
Jionglong Su [2]
Institutions
[1] Xi’an Jiaotong-Liverpool University, Department of Financial and Actuarial Mathematics, School of Mathematics and Physics
[2] XJTLU Entrepreneur College (Taicang), School of AI and Advanced Computing
Keywords
Deep reinforcement learning; Portfolio optimization; Black-Litterman model; Transformer neural network
DOI: 10.1007/s00521-024-09805-9
Abstract
As a model-free algorithm, a deep reinforcement learning (DRL) agent learns and makes decisions by interacting with the environment in an unsupervised way. In recent years, DRL algorithms have been widely applied to portfolio optimization over consecutive trading periods, since the DRL agent can dynamically adapt to market changes and does not rely on a specification of the joint dynamics across the assets. However, typical DRL agents for portfolio optimization cannot learn a policy that is aware of the dynamic correlation between portfolio asset returns. Since the dynamic correlations among portfolio assets are crucial in optimizing the portfolio, the lack of such knowledge makes it difficult for the DRL agent to maximize the return per unit of risk, especially when the target market permits short selling (e.g., the US stock market). In this research, we propose a hybrid portfolio optimization model combining a DRL agent with the Black-Litterman (BL) model, enabling the agent to learn the dynamic correlation between portfolio asset returns and to implement an efficacious long/short strategy based on that correlation. Essentially, the DRL agent learns a policy for applying the BL model to determine the target portfolio weights. In this model, we formulate a specific objective function based on the environment’s reward function, which considers the return, risk, and transaction scale of the portfolio. The DRL agent is trained by propagating the objective function’s gradient to its policy function. To test our DRL agent, we construct the portfolio from all the Dow Jones Industrial Average constituent stocks. Empirical results on real-world United States stock market data demonstrate that our DRL agent significantly outperforms various comparison portfolio choice strategies and alternative DRL frameworks by at least 42% in terms of accumulated return. In terms of the return per unit of risk, our DRL agent also significantly outperforms comparative portfolio choice strategies and alternative strategies based on other machine learning frameworks.
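The abstract does not reproduce the paper's exact formulation, but the core of any such hybrid is the standard Black-Litterman posterior: a market-equilibrium prior is blended with investor views (here, views a DRL agent would emit) weighted by their uncertainty. A minimal sketch in numpy, assuming the textbook BL closed form; the function name, parameter defaults, and the toy view matrix are illustrative, not taken from the paper:

```python
import numpy as np

def black_litterman_posterior(Sigma, w_mkt, P, Q, Omega, delta=2.5, tau=0.05):
    """Posterior expected returns under the Black-Litterman model.

    Sigma : (n, n) covariance matrix of asset returns
    w_mkt : (n,)   market-capitalisation weights (equilibrium prior)
    P     : (k, n) view "pick" matrix (which assets each view concerns)
    Q     : (k,)   expected returns asserted by the views
    Omega : (k, k) view-uncertainty covariance (smaller = more confident)
    delta : risk-aversion coefficient; tau : prior-uncertainty scalar
    """
    # Reverse-optimized equilibrium returns implied by market weights.
    pi = delta * Sigma @ w_mkt
    tau_Sigma_inv = np.linalg.inv(tau * Sigma)
    Omega_inv = np.linalg.inv(Omega)
    # Precision-weighted blend of prior (pi) and views (Q).
    A = tau_Sigma_inv + P.T @ Omega_inv @ P
    b = tau_Sigma_inv @ pi + P.T @ Omega_inv @ Q
    return np.linalg.solve(A, b)

# Two-asset toy example: one near-certain view that asset 0 returns 10%.
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
w_mkt = np.array([0.6, 0.4])
P = np.array([[1.0, 0.0]])
Q = np.array([0.10])
Omega = np.array([[1e-6]])   # tiny uncertainty: view should dominate
mu = black_litterman_posterior(Sigma, w_mkt, P, Q, Omega)
```

As the view uncertainty `Omega` shrinks, the posterior mean for the viewed asset converges to the view `Q`; as it grows, the posterior reverts to the equilibrium prior `pi`. In the paper's setup, the agent's learned views would flow through exactly this kind of blend, and mean-variance optimization on the posterior then yields the long/short target weights.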
Pages: 20111–20146 (35 pages)