Regularized stochastic dual dynamic programming for convex nonlinear optimization problems

Cited: 8
Authors
Guigues, Vincent [1 ]
Lejeune, Miguel A. [2 ]
Tekaya, Wajdi [3 ]
Affiliations
[1] FGV Praia Botafogo, Sch Appl Math, Rio De Janeiro, Brazil
[2] George Washington Univ, Washington, DC 20052 USA
[3] Quant Dev, Hammam Chatt 1164, Tunisia
Keywords
Stochastic optimization; Stochastic dual dynamic programming; Regularization; Portfolio selection; Market impact costs; DECOMPOSITION METHODS; TRANSACTION COSTS; LINEAR-PROGRAMS; RISK; CONVERGENCE; PORTFOLIO; SDDP
DOI
10.1007/s11081-020-09511-0
Chinese Library Classification (CLC)
T (Industrial Technology)
Discipline Classification Code
08
Abstract
We define a regularized variant of the dual dynamic programming (DDP) algorithm, called DDP-REG, to solve nonlinear dynamic programming equations, and we extend it to nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a recently introduced regularization of the stochastic dual dynamic programming (SDDP) algorithm, which was studied only for linear problems and with less general prox-centers. We prove the convergence of DDP-REG and SDDP-REG and assess their performance on portfolio models with direct transaction costs and market impact costs. In particular, we propose a risk-neutral portfolio selection model that can be cast as a multistage stochastic second-order cone program; the formulation is motivated by the significance of market impact costs in large portfolio rebalancing operations. Numerical simulations show that DDP-REG is much quicker than DDP on all problem instances considered (up to 184 times quicker) and that SDDP-REG is quicker than SDDP on the tested instances of portfolio selection problems with market impact costs and much faster on the risk-neutral multistage stochastic linear program implemented (8.2 times faster).
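To make the regularization idea concrete, the sketch below shows what a single regularized stage subproblem could look like: a stage cost plus a cutting-plane model of the cost-to-go function, with a quadratic prox penalty pulling the decision toward a prox-center. This is a minimal illustration under assumed ingredients (a linear stage cost, linear constraints, a Euclidean prox term, and the hypothetical helper name solve_regularized_stage), not the authors' implementation; it uses the cvxpy modeling library.

```python
# Minimal sketch (not the paper's code) of one regularized stage subproblem:
#   min  c @ x + theta + rho * ||x - x_prox||^2
#   s.t. A @ x <= b,  theta >= alpha_k + beta_k @ x  for each cut k
# The prox term rho * ||x - x_prox||^2 is the regularization; setting rho = 0
# recovers an ordinary (unregularized) DDP/SDDP stage subproblem.
import cvxpy as cp
import numpy as np

def solve_regularized_stage(c, A, b, cuts, x_prox, rho, theta_lb=0.0):
    n = len(c)
    x = cp.Variable(n)          # stage decision
    theta = cp.Variable()       # cutting-plane model of the cost-to-go
    constraints = [A @ x <= b,
                   theta >= theta_lb]   # lower bound keeps theta bounded when cuts are few
    for alpha, beta in cuts:            # each cut: theta >= alpha + beta @ x
        constraints.append(theta >= alpha + beta @ x)
    objective = cp.Minimize(c @ x + theta + rho * cp.sum_squares(x - x_prox))
    cp.Problem(objective, constraints).solve()
    return x.value, theta.value

# Toy usage with made-up data:
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.5])
cuts = [(0.0, np.array([-1.0, 0.0]))]
x_opt, theta_opt = solve_regularized_stage(c, A, b, cuts,
                                           x_prox=np.zeros(2), rho=0.5)
```

In a full DDP-REG/SDDP-REG loop, the prox-center x_prox would typically be updated from trial points of previous iterations and rho tuned or driven to zero; those scheduling choices are where the paper's specific contributions lie and are not reproduced here.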
Pages: 1133-1165
Page count: 33