No-Regret Learning Supports Voters' Competence

Cited by: 0
Authors
Spelda, Petr [1 ,3 ]
Stritecky, Vit [1 ]
Symons, John [2 ]
Affiliations
[1] Charles Univ Prague, Inst Polit Studies, Fac Social Sci, Dept Secur Studies, Prague, Czech Republic
[2] Univ Kansas, Dept Philosophy, Lawrence, KS USA
[3] Charles Univ Prague, Inst Polit Studies, Fac Social Sci, Dept Secur Studies, U Krize 8, Prague 5, Czech Republic
Keywords
Jury theorems; meta-induction; no-regret learning; epistemic democracy; science; disagreement; networks; fact
DOI
10.1080/02691728.2023.2252763
Chinese Library Classification (CLC)
N09 [History of Natural Science]; B [Philosophy, Religion]
Discipline codes
01 ; 0101 ; 010108 ; 060207 ; 060305 ; 0712 ;
Abstract
Procedural justifications of democracy emphasize inclusiveness and respect, and in doing so come into conflict with instrumental justifications that depend on voters' competence. This conflict raises questions about jury theorems and makes their standing in democratic theory contested. We show that a type of no-regret learning called meta-induction can help to satisfy the competence assumption without excluding voters or diverse opinion leaders on an a priori basis. Meta-induction assigns weights to opinion leaders based on their past predictive performance and uses these weights to determine the degree to which each leader is included in recommendations for voters. The weighting minimizes the difference between the performance of meta-induction and that of the best opinion leader in hindsight. This difference is the regret of meta-induction, and its minimization ensures that the recommendations are optimal in supporting voters' competence. Because meta-induction has optimal truth-tracking properties, it supports voters' competence even when targeted by mis/disinformation, and it should be considered a tool for supporting democracy under conditions of hyper-plurality.
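The weighting scheme the abstract describes belongs to the family of no-regret forecasters that reweight experts ("opinion leaders") by past predictive performance. The following is a minimal sketch in the exponential-weights style, assuming squared-error loss, probability forecasts in [0, 1], and a hypothetical learning rate `eta`; the paper's exact meta-inductive rule may differ.

```python
import math
import random

def exponential_weights(expert_preds, outcomes, eta=0.5):
    """No-regret aggregation sketch (exponential weights).

    expert_preds[t][i] is opinion leader i's probability forecast at
    round t; outcomes[t] is the realized binary outcome (0 or 1).
    Returns the learner's cumulative squared-error loss and the
    cumulative loss of the best single leader in hindsight.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n
    learner_loss = 0.0
    expert_losses = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Recommendation: average the leaders, weighted by past performance.
        forecast = sum(w * p for w, p in zip(weights, preds)) / total
        learner_loss += (forecast - y) ** 2
        for i, p in enumerate(preds):
            loss = (p - y) ** 2
            expert_losses[i] += loss
            # Exponentially downweight leaders with poor track records.
            weights[i] *= math.exp(-eta * loss)
    return learner_loss, min(expert_losses)

# Hypothetical example: one well-calibrated and one miscalibrated leader.
random.seed(0)
outcomes = [1 if random.random() < 0.7 else 0 for _ in range(200)]
preds = [[0.7, 0.3] for _ in outcomes]
learner, best = exponential_weights(preds, outcomes)
# Regret = learner - best; for convex losses in [0, 1] it is bounded
# by ln(n)/eta + eta*T/8, so it vanishes per round as T grows.
```

The guarantee, not the particular weights, is what matters for the abstract's argument: whatever the mix of reliable and unreliable leaders, the aggregated recommendation is asymptotically at least as accurate as the best leader in hindsight.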
Pages: 543-559
Page count: 17