Receding horizon approach to Markov games for infinite horizon discounted cost

Cited: 0

Authors
Chang, HS [1 ]
Marcus, SI [1 ]
Affiliation
[1] Univ Maryland, Dept Elect & Comp Engn, College Pk, MD 20742 USA
Keywords: (none listed)
DOI: not available
CLC number: TP [automation technology; computer technology]
Discipline code: 0812
Abstract
We consider a receding horizon approach as an approximate solution to two-person zero-sum Markov games with an infinite horizon discounted cost criterion. We first present error bounds from the optimal equilibrium value of the game when both players take "correlated" receding horizon policies that are based on exact or approximate solutions of receding finite horizon subgames. Motivated by the worst-case optimal control of queueing systems by Altman [1], we then analyze error bounds when the minimizer plays the (approximate) receding horizon control and the maximizer plays the worst-case policy. We give two heuristic examples of the approximate receding horizon control: we extend "parallel rollout" and "hindsight optimization" by Chang et al. [11, 13] to the Markov game setting within the framework of the approximate receding horizon approach and analyze their performance. In the parallel rollout approach, the minimizing player dynamically combines multiple heuristic policies from a given set so as to improve the performance of all of the heuristic policies simultaneously, under the assumption that the maximizing player has chosen a fixed worst-case policy. Given epsilon > 0, we give the value of the receding horizon that guarantees that the parallel rollout policy with that horizon, played by the minimizer, "dominates" any heuristic policy in the set by epsilon. In the hindsight optimization approach, the minimizing player makes a decision based on his expected optimal hindsight performance over a finite horizon. We finally discuss practical implementations of the receding horizon approaches via simulation.
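As a rough illustration of the receding horizon idea described in the abstract, the sketch below (not from the paper; the game, transition map, costs, and discount factor are invented for illustration) computes the finite-horizon discounted minimax value of a toy deterministic zero-sum Markov game by backward induction, and has the minimizer commit only to the first action of each finite-horizon subgame solution, re-solving at every step. For simplicity both players are restricted to pure actions, so this is the minimizer's security (worst-case) value rather than the equilibrium value, which in general requires mixed strategies.

```python
# Hypothetical toy deterministic two-person zero-sum Markov game (all
# numbers invented for illustration).  P[s][(a, b)] is the next state and
# C[s][(a, b)] the stage cost paid by the minimizer when the minimizer
# plays a and the maximizer plays b in state s.
GAMMA = 0.9
STATES = [0, 1]
ACTIONS = [0, 1]
P = {0: {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0},
     1: {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}}
C = {0: {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 2.0, (1, 1): 0.0},
     1: {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 4.0, (1, 1): 3.0}}


def finite_horizon_value(horizon):
    """H-step discounted minimax (security) value for the minimizer,
    computed by backward induction over pure actions only; the true game
    value may need mixed strategies, which this sketch ignores."""
    V = {s: 0.0 for s in STATES}  # terminal value V_0 = 0
    for _ in range(horizon):
        V = {s: min(max(C[s][(a, b)] + GAMMA * V[P[s][(a, b)]]
                        for b in ACTIONS)
                    for a in ACTIONS)
             for s in STATES}
    return V


def receding_horizon_action(s, horizon):
    """Minimizer's receding horizon move at state s: solve the
    finite-horizon subgame rooted at s against a worst-case maximizer
    and return only the first action (to be re-solved at every step)."""
    V = finite_horizon_value(horizon - 1)
    best_a, best_val = None, float("inf")
    for a in ACTIONS:
        worst = max(C[s][(a, b)] + GAMMA * V[P[s][(a, b)]] for b in ACTIONS)
        if worst < best_val:
            best_a, best_val = a, worst
    return best_a, best_val
```

Played online, the controller would call `receding_horizon_action` at every visited state; the paper's error bounds concern how far the resulting infinite horizon discounted cost can be from the optimal equilibrium value as the receding horizon grows.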
Pages: 1380-1385 (6 pages)