Faster Optimistic Online Mirror Descent for Extensive-Form Games

Cited by: 0
Authors
Jiang, Huacong [1 ]
Liu, Weiming [1 ]
Li, Bin [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adaptive optimistic online mirror descent; Extensive-form games; Nash equilibrium; Counterfactual regret minimization; Poker;
DOI
10.1007/978-3-031-20862-1_7
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Online Mirror Descent (OMD) is a class of regret minimization algorithms for Online Convex Optimization (OCO). Recently, it has been applied to solving Extensive-Form Games (EFGs) by approximating a Nash equilibrium. In particular, optimistic variants of OMD have been developed that enjoy a better theoretical convergence rate than common regret minimization algorithms for EFGs, e.g., Counterfactual Regret Minimization (CFR). However, despite this theoretical advantage, existing OMD algorithms and their optimistic variants have been shown to converge to a Nash equilibrium more slowly than state-of-the-art (SOTA) CFR variants in practice. A likely reason for the inferior performance is that they usually use constant regularizers whose parameters must be chosen at the outset. Inspired by the adaptive nature of CFRs, this paper presents an adaptive method to speed up the optimistic variants of OMD and, based on it, proposes Adaptive Optimistic OMD (Ada-OOMD) for EFGs. In this algorithm, the regularizers adapt to real-time regrets, so the algorithm may converge faster in practice. Experimental results show that Ada-OOMD is at least two orders of magnitude faster than existing optimistic OMD algorithms. In some extensive-form games, such as Kuhn poker and Goofspiel, the convergence speed of Ada-OOMD even exceeds that of the SOTA CFRs. https://github.com/github-jhc/ada-oomd
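The abstract describes regularizers inside optimistic OMD that adapt to observed regrets. As a rough, hedged illustration of that general idea (not the paper's Ada-OOMD, which operates over the sequence-form strategy space of an EFG), the sketch below runs optimistic OMD with an entropic regularizer on a single probability simplex and an AdaGrad-style step size that adapts to prediction errors; the function name `optimistic_omd_simplex`, the step-size rule, and the `loss_fn` interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def optimistic_omd_simplex(loss_fn, n_actions, n_rounds):
    """Toy optimistic OMD on the probability simplex with an entropic
    regularizer and an adaptive step size.

    Illustrative sketch only; NOT the paper's Ada-OOMD, which works over
    the sequence form of an extensive-form game.
    """
    x_hat = np.full(n_actions, 1.0 / n_actions)   # "lazy" mirror-descent iterate
    x = x_hat.copy()                              # strategy actually played
    prev_loss = np.zeros(n_actions)               # prediction = last observed loss
    pred_err_sum = 1.0                            # accumulated prediction error (drives step size)
    for t in range(n_rounds):
        eta = 1.0 / np.sqrt(pred_err_sum)         # step size shrinks as prediction errors accumulate
        # Prediction step: play the lazy iterate nudged by the predicted loss.
        x = x_hat * np.exp(-eta * prev_loss)
        x /= x.sum()
        loss = loss_fn(x, t)                      # observe the actual loss vector
        # Correction step: fold the actual loss into the lazy iterate.
        x_hat = x_hat * np.exp(-eta * loss)
        x_hat /= x_hat.sum()
        pred_err_sum += np.sum((loss - prev_loss) ** 2)
        prev_loss = loss
    return x


# Usage: with a fixed loss vector, the strategy concentrates on the smallest loss.
rng = np.random.default_rng(0)
fixed_loss = rng.uniform(size=3)
print(optimistic_omd_simplex(lambda x, t: fixed_loss, n_actions=3, n_rounds=2000))
```

The two exponentiated-gradient updates per round are the standard optimistic (predictive) OMD pattern under the entropic mirror map; the adaptive step size here is only one simple choice and stands in for the regret-adaptive regularizers discussed in the abstract.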
Pages: 90-103
Page count: 14