Nonstationary denumerable state Markov decision processes – with average variance criterion

Cited by: 0
Author
Xianping Guo
Affiliation
[1] Department of Mathematics, Zhongshan University, Guangzhou 510275, P. R. China (e-mail: stsdaiy@zsulink.zsu.edu.cn)
Keywords
Discrete-time Markov decision processes; average expected criteria; optimality equations; average variance criterion; optimal Markov policies
DOI
10.1007/PL00020908
Abstract
In this paper, we consider nonstationary Markov decision processes (MDPs, for short) with the average variance criterion on a countable state space, with finite action spaces and bounded one-step rewards. Using the optimality equations provided in this paper, we translate the average variance criterion into a new average expected cost criterion. We then prove that there exists a Markov policy, optimal for the original average expected reward criterion, that minimizes the average variance within the class of policies optimal for that criterion.
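The two-step selection the abstract describes — first find the policies that are optimal for the average expected reward, then minimize the average variance within that class — can be sketched on a hypothetical toy example. The 2-state, 2-action MDP below and all its numbers are illustrative assumptions, not taken from the paper; the variance here is the stationary variance of the one-step reward under each policy.

```python
import itertools

# Hypothetical 2-state, 2-action toy MDP (all numbers are illustrative,
# not from the paper).  P[a][s] is the transition row out of state s
# under action a; r[s][a] is the bounded one-step reward.
P = {
    0: [[1.0, 0.0], [1.0, 0.0]],   # "safe" action: always return to state 0
    1: [[0.5, 0.5], [1.0, 0.0]],   # "risky" action: may visit state 1
}
r = [[1.0, 1.5],   # state 0: safe pays 1, risky pays 1.5
     [0.0, 0.0]]   # state 1: nothing is earned

def stationary(P_pi):
    """Stationary distribution of a 2-state chain: pi0 = p10 / (p01 + p10)."""
    p01, p10 = P_pi[0][1], P_pi[1][0]
    pi0 = p10 / (p01 + p10)  # assumes the chain is not fully absorbing
    return [pi0, 1.0 - pi0]

def evaluate(policy):
    """Long-run average reward g and average variance of the one-step reward."""
    P_pi = [P[policy[s]][s] for s in range(2)]
    mu = stationary(P_pi)
    rew = [r[s][policy[s]] for s in range(2)]
    g = sum(m * x for m, x in zip(mu, rew))
    var = sum(m * (x - g) ** 2 for m, x in zip(mu, rew))
    return g, var

# Enumerate every stationary deterministic Markov policy.
results = {pi: evaluate(pi) for pi in itertools.product([0, 1], repeat=2)}
g_star = max(g for g, _ in results.values())
# Step 1: keep the average-reward-optimal policies; step 2: minimize variance.
optimal = [pi for pi, (g, _) in results.items() if abs(g - g_star) < 1e-9]
best = min(optimal, key=lambda pi: results[pi][1])
print("average-optimal policies:", optimal)
print("variance-minimizing among them:", best, "variance:", results[best][1])
```

In this toy instance every policy earns average reward 1, so all are average-optimal, yet the "safe" policy attains variance 0 while the "risky" ones attain variance 0.5; the selection keeps the safe one. The paper's contribution is that such a variance-minimizing Markov policy exists even in the nonstationary, countable-state setting, where brute-force enumeration is unavailable.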
Pages: 87–96 (9 pages)