A maximum principle for optimal control of discrete-time stochastic systems with Markov jump

Cited by: 0
Authors
Lin X.-Y. [1 ]
Wang X.-R. [1 ]
Zhang W.-H. [2 ]
Affiliations
[1] College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao, Shandong
[2] College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, Shandong
Funding
National Natural Science Foundation of China;
Keywords
backward stochastic difference equations; Hamilton-Jacobi-Bellman equations; Markov jump; maximum principle; optimal control;
DOI
10.7641/CTA.2022.10807
Abstract
The maximum principle (MP) for a class of discrete-time nonlinear stochastic optimal control problems is established, in which the control systems are driven by both Markov jumps and multiplicative noise. Firstly, based on the adapted solutions of backward stochastic difference equations, a representation of the linear functional constrained by a linear difference equation is obtained, and the Riesz representation theorem is used to prove the uniqueness of this representation. Secondly, the spike variation method is extended to nonlinear stochastic difference equations with Markov jumps, and the variational equation of the state equation is derived. Thirdly, by introducing a Hamiltonian function, a necessary optimality condition for the discrete-time nonlinear stochastic optimal control system with Markov jumps is obtained, and it is shown that the adjoint equation of the maximum principle is a pair of backward stochastic difference equations. Moreover, a sufficient condition is also given, and the corresponding Hamilton-Jacobi-Bellman equation is derived. Finally, a practical example is given to illustrate the practicability and feasibility of the proposed theory. © 2024 South China University of Technology. All rights reserved.
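As a rough illustration of the objects named in the abstract (not the authors' exact formulation), the following is a minimal compilable LaTeX sketch of how a discrete-time stochastic maximum principle with Markov jumps is typically stated. The dynamics f, g, running cost l, terminal cost h, adjoint p, Markov chain \theta_k, and noise w_{k+1} are assumed names introduced here for illustration, and the martingale component of the authors' adjoint pair is folded into the conditional expectation in this simplified sketch.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Assumed controlled dynamics with Markov chain \theta_k and i.i.d. noise w_{k+1}:
%   x_{k+1} = f(k,x_k,u_k,\theta_k) + g(k,x_k,u_k,\theta_k)\, w_{k+1},
% and assumed cost functional
%   J(u) = E[ \sum_{k=0}^{N-1} l(k,x_k,u_k,\theta_k) + h(x_N,\theta_N) ].
\begin{align*}
% Hamiltonian: running cost plus adjoint paired with drift and diffusion
H(k,x,u,\theta,p,w) &= l(k,x,u,\theta) + p^{\top}\bigl(f(k,x,u,\theta) + g(k,x,u,\theta)\,w\bigr),\\
% Adjoint (backward) difference equation, solved backward from the terminal condition
p_k &= \mathbb{E}\bigl[\partial_x H(k,x_k^{*},u_k^{*},\theta_k,p_{k+1},w_{k+1}) \,\big|\, \mathcal{F}_k\bigr],
\qquad p_N = \partial_x h(x_N^{*},\theta_N),\\
% Necessary optimality condition: the optimal control minimizes the conditional Hamiltonian
u_k^{*} &\in \operatorname*{arg\,min}_{u\in U}\;
\mathbb{E}\bigl[H(k,x_k^{*},u,\theta_k,p_{k+1},w_{k+1}) \,\big|\, \mathcal{F}_k\bigr].
\end{align*}
\end{document}

In the paper's setting the adjoint process is a pair solving backward stochastic difference equations; the sketch above only indicates the general structure of the Hamiltonian, the backward recursion, and the pointwise optimality condition.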
Pages: 895-904
Number of pages: 9