Can Chocolate be Money as a Medium of Exchange? Belief Learning vs. Reinforcement Learning

Cited by: 0
Author
Toshiji Kawagoe
Affiliation
[1] Future University,
Keywords
money; nondurable goods; search; learning; agent-based simulation;
DOI: 10.14441/eier.5.279
Abstract
In this paper, a variant of the Kiyotaki and Wright model of the emergence of money is investigated. In this variant, goods differ in durability rather than in storage cost, as assumed in the original Kiyotaki and Wright model. Among the three goods, two are infinitely durable while the third is nondurable. Under certain conditions, the nondurable good can serve as money, that is, as a medium of exchange. However, the stationary equilibrium condition may be sensitive to the time evolution of the distribution of goods that each agent holds in its inventory. We test, with several learning models that use different levels of information, whether the stationary equilibrium in this economy is attainable when the distribution of goods starts far from the equilibrium distribution. Belief learning with full information outperforms the other models. The stationary equilibrium is never attained under belief learning with partial information. Under reinforcement learning, which uses no information about the distribution of goods, agents learn not to use the nondurable good as money. It is surprising that providing partial information about the distribution of goods is detrimental to the emergence of a nondurable-good money.
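The agent-based setup described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy version, not the paper's actual model: the payoff value, the propensity-update rule, and all names are illustrative assumptions. It shows the reinforcement-learning case, in which agents adjust acceptance propensities from their own trade payoffs and use no information about the economy-wide distribution of goods.

```python
# Hypothetical sketch of reinforcement-learning agents in a
# Kiyotaki-Wright-style three-good economy (illustrative, not the
# paper's actual specification).
import random

N_TYPES = 3          # type i consumes good i and produces good (i+1) % 3
N_AGENTS = 30        # agents per type
CONSUME_PAYOFF = 1.0 # reward for consuming one's own consumption good

class Agent:
    def __init__(self, agent_type):
        self.type = agent_type
        self.good = (agent_type + 1) % N_TYPES  # start with production good
        # Propensity to accept each good in trade; reinforcement learning
        # keeps no information about the economy-wide goods distribution.
        self.propensity = [1.0] * N_TYPES

    def accepts(self, offered_good):
        # Always accept your own consumption good; otherwise accept
        # stochastically, in proportion to learned propensities.
        if offered_good == self.type:
            return True
        p = self.propensity[offered_good] / sum(self.propensity)
        return random.random() < p

    def reinforce(self, good, reward):
        self.propensity[good] += reward

def step(agents):
    # Random pairwise matching; a trade occurs only if both sides accept.
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        if a.good != b.good and a.accepts(b.good) and b.accepts(a.good):
            a.good, b.good = b.good, a.good
        for agent in (a, b):
            if agent.good == agent.type:  # consume, then produce anew
                agent.reinforce(agent.good, CONSUME_PAYOFF)
                agent.good = (agent.type + 1) % N_TYPES

def run(periods=200, seed=0):
    random.seed(seed)
    agents = [Agent(t) for t in range(N_TYPES) for _ in range(N_AGENTS)]
    for _ in range(periods):
        step(agents)
    return agents
```

In this sketch, whether a good circulates as a medium of exchange can be read off the learned propensities: a good j != i with a persistently high acceptance propensity among type-i agents is being accepted only to re-trade later, i.e., used as money.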
Pages: 279-292 (13 pages)
Published in: Evolutionary and Institutional Economics Review, 2009, 5 (2): 279-292