A stationary policy and an initial state in an MDP (Markov decision process) induce a stationary probability distribution of the reward. The problem analyzed here is generating the Pareto optima in the sense of high mean and low variance of the stationary distribution. In the unichain case, Pareto optima can be computed either with policy improvement or with a linear program having the same number of variables and one more constraint than the formulation for gain-rate optimization. The same linear program suffices in the multichain case if the ergodic class is an element of choice.
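The linear program in question can be illustrated concretely. The sketch below uses the standard gain-rate LP over stationary state-action frequencies x(s, a) and adds one inequality capping the second moment E[r²]; since variance = E[r²] − mean², sweeping the cap traces candidate mean-variance Pareto points. The 2-state, 2-action MDP data are hypothetical, and this is a plausible rendering of the abstract's "one more constraint" construction rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP (illustrative data, not from the paper).
S, A = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s][a][s']
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 4.0],                  # r[s][a]
              [0.0, 2.0]])

def pareto_point(second_moment_cap):
    """Maximize the mean stationary reward subject to a cap on the second
    moment E[r^2] -- a single extra constraint beyond the standard
    gain-rate LP over state-action frequencies x(s, a)."""
    n = S * A
    rf = r.ravel()
    # Flow conservation: sum_a x(s', a) = sum_{s,a} P(s'|s,a) x(s, a)
    A_eq = np.zeros((S + 1, n))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] -= P[s, a, sp]
        for a in range(A):
            A_eq[sp, sp * A + a] += 1.0
    A_eq[S, :] = 1.0                       # frequencies sum to one
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0
    # The one extra inequality: E[r^2] <= cap
    A_ub = (rf ** 2)[None, :]
    res = linprog(-rf, A_ub=A_ub, b_ub=[second_moment_cap],
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    if not res.success:
        return None
    mean = rf @ res.x
    var = (rf ** 2) @ res.x - mean ** 2
    return mean, var

# Sweep the cap to trace candidate Pareto points (mean, variance).
for cap in (2.0, 5.0, 10.0):
    out = pareto_point(cap)
    if out:
        print(f"cap={cap:5.1f}  mean={out[0]:.3f}  var={out[1]:.3f}")
```

Tightening the cap trades mean for variance, so varying it sweeps out the efficient frontier of the stationary reward distribution.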
Affiliation:
Chuo Univ, Dept Ind & Syst Engn, Bunkyo Ku, 1-13-27 Kasuga, Tokyo 1128551, Japan