Robustness and Sample Complexity of Model-Based MARL for General-Sum Markov Games

Cited by: 2
Authors
Subramanian, Jayakumar [1 ]
Sinha, Amit [2 ]
Mahajan, Aditya [2 ]
Affiliations
[1] Adobe Inc, Media & Data Sci Res Lab, Digital Experience Cloud, Noida, Uttar Pradesh, India
[2] McGill Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
Keywords
DYNAMIC OLIGOPOLY; STATIONARY EQUILIBRIA; STOCHASTIC GAMES; APPROXIMATIONS; COMPETITION; ESTIMATORS;
DOI
10.1007/s13235-023-00490-2
CLC Classification
O1 [Mathematics];
Subject Classification
0701; 070101;
Abstract
Multi-agent reinforcement learning (MARL) is often modeled using the framework of Markov games (also called stochastic games or dynamic games). Most of the existing literature on MARL concentrates on zero-sum Markov games and is not applicable to general-sum Markov games. It is known that the best-response dynamics in general-sum Markov games are not a contraction; therefore, different equilibria in general-sum Markov games can have different values. Moreover, the Q-function is not sufficient to completely characterize the equilibrium. Given these challenges, model-based learning is an attractive approach for MARL in general-sum Markov games. In this paper, we investigate the fundamental question of sample complexity for model-based MARL algorithms in general-sum Markov games. We show two results. We first use Hoeffding-inequality-based bounds to show that Õ((1 − γ)^(−4) α^(−2)) samples per state-action pair are sufficient to obtain an α-approximate Markov perfect equilibrium with high probability, where γ is the discount factor and the Õ(·) notation hides logarithmic terms. We then use Bernstein-inequality-based bounds to show that Õ((1 − γ)^(−1) α^(−2)) samples are sufficient. To obtain these results, we study the robustness of Markov perfect equilibria to model approximations. We show that the Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game, and we provide explicit bounds on the approximation error. We illustrate the results via a numerical example.
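The two sample-complexity rates in the abstract can be compared with a back-of-the-envelope sketch. This is a hypothetical illustration (not code from the paper): the constant `c` and all logarithmic factors hidden by the Õ(·) notation are dropped, so the numbers only show how the bounds scale with the discount factor γ and the accuracy α.

```python
def hoeffding_samples(gamma: float, alpha: float, c: float = 1.0) -> float:
    """Hoeffding-style rate: samples per state-action pair sufficient for an
    alpha-approximate Markov perfect equilibrium, scaling as (1-gamma)^-4 * alpha^-2.
    Constants and log factors hidden by the O-tilde notation are omitted."""
    return c * (1.0 - gamma) ** -4 * alpha ** -2

def bernstein_samples(gamma: float, alpha: float, c: float = 1.0) -> float:
    """Sharper Bernstein-style rate from the abstract: (1-gamma)^-1 * alpha^-2.
    Same caveats: constants and logarithmic factors are omitted."""
    return c * (1.0 - gamma) ** -1 * alpha ** -2

gamma, alpha = 0.95, 0.1
# The Hoeffding-based rate blows up much faster as gamma -> 1 (effective
# horizon 1/(1-gamma) grows), while the Bernstein-based rate grows linearly.
print(f"Hoeffding-style rate:  {hoeffding_samples(gamma, alpha):.3e}")
print(f"Bernstein-style rate:  {bernstein_samples(gamma, alpha):.3e}")
```

For γ = 0.95 and α = 0.1 the Hoeffding-style rate exceeds the Bernstein-style rate by a factor of (1 − γ)^(−3) = 8000, which is why the second result in the abstract is the stronger one.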
Pages: 56–88
Page count: 33
Related Papers
50 items in total
  • [21] Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games
    Yan, Yuling
    Li, Gen
    Chen, Yuxin
    Fan, Jianqing
    OPERATIONS RESEARCH, 2024, 72 (06) : 2430 - 2445
  • [22] General-sum stochastic games: Verifiability conditions for Nash equilibria
    Prasad, H. L.
    Bhatnagar, S.
    AUTOMATICA, 2012, 48 (11) : 2923 - 2930
  • [23] OPTIMISTIC GRADIENT DESCENT ASCENT IN GENERAL-SUM BILINEAR GAMES
    de Montbrun, Etienne
    Renault, Jerome
    JOURNAL OF DYNAMICS AND GAMES, 2024,
  • [24] A new learning algorithm for cooperative agents in general-sum games
    Song, Mei-Ping
    An, Ju-Bai
    Chen, Rong
    PROCEEDINGS OF 2007 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2007, : 50 - 54
  • [25] Safely Using Predictions in General-Sum Normal Form Games
    Damer, Steven
    Gini, Maria
    AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 924 - 932
  • [26] Nash Q-learning for general-sum stochastic games
    Hu, JL
    Wellman, MP
    JOURNAL OF MACHINE LEARNING RESEARCH, 2004, 4 (06) : 1039 - 1069
  • [27] Policy Invariance under Reward Transformations for General-Sum Stochastic Games
    Lu, Xiaosong
    Schwartz, Howard M.
    Givigi, Sidney N., Jr.
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2011, 41 : 397 - 406
  • [28] Identifying and Responding to Cooperative Actions in General-sum Normal Form Games
    Damer, Steven
    AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 1826 - 1827
  • [29] Learning Stationary Correlated Equilibria in Constrained General-Sum Stochastic Games
    Hakami, Vesal
    Dehghan, Mehdi
    IEEE TRANSACTIONS ON CYBERNETICS, 2016, 46 (07) : 1640 - 1654
  • [30] Learning to Correlate in Multi-Player General-Sum Sequential Games
    Celli, Andrea
    Marchesi, Alberto
    Bianchi, Tommaso
    Gatti, Nicola
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32