Modeling Biological Agents Beyond the Reinforcement-Learning Paradigm

Cited by: 5
Authors
Georgeon, Olivier L. [1 ]
Casado, Remi C. [1 ]
Matignon, Laetitia A. [1 ]
Affiliations
[1] Univ Lyon 1, LIRIS, UMR5205, F-69622 Villeurbanne, France
Keywords
DOI
10.1016/j.procs.2015.12.179
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline Code
0812;
Abstract
It is widely acknowledged that biological beings (animals) are not Markov: modelers generally do not model them as agents that receive a complete representation of their environment's state as input (except perhaps in simple controlled tasks). In this paper, we claim that biological beings generally cannot recognize rewarding Markov states of their environment either. Therefore, we model them as agents trying to perform rewarding interactions with their environment (interaction-driven tasks), rather than as agents trying to reach rewarding states (state-driven tasks). We review two interaction-driven tasks, the AB task and the AABB task, and implement a non-Markov Reinforcement-Learning (RL) algorithm based upon historical sequences and Q-learning. Results show that this RL algorithm takes significantly longer than a constructivist algorithm implemented previously by Georgeon, Ritter, & Haynes (2009), because the constructivist algorithm directly learns and repeats hierarchical sequences of interactions, whereas the RL algorithm spends time learning Q-values. Along with theoretical arguments, these results support the constructivist paradigm for modeling biological agents.
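The abstract's non-Markov RL approach (Q-learning over windows of recent interactions) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the `interaction_reward` function below is a hypothetical simplification of an AABB-style task, rewarding the agent only when its recent actions continue the cyclic pattern A, A, B, B, and the "state" handed to Q-learning is simply the last few actions rather than any true environment state.

```python
import random
from collections import defaultdict

PATTERN = "AABBAABB"  # two cycles of the rewarding A,A,B,B pattern

def interaction_reward(recent, action):
    # Hypothetical AABB-style task: +1 when the recent actions plus the
    # new one form a contiguous piece of the cyclic pattern, else -1.
    return 1.0 if "".join(recent) + action in PATTERN else -1.0

def q_learning_aabb(steps=2000, window=3, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)                # Q-values keyed by (history window, action)
    history, rewards = [], []
    for _ in range(steps):
        state = tuple(history[-window:])  # last few interactions stand in for a Markov state
        if rng.random() < eps:            # epsilon-greedy exploration
            action = rng.choice("AB")
        else:
            action = max("AB", key=lambda a: q[(state, a)])
        r = interaction_reward(state, action)
        history.append(action)
        next_state = tuple(history[-window:])
        best_next = max(q[(next_state, a)] for a in "AB")
        # Standard Q-learning update over history-window "states"
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        rewards.append(r)
    return rewards, q
```

The sketch makes the paper's point concrete: because the reward depends on a sequence of interactions rather than an observable state, the tabular Q-learner must spend many steps populating Q-values over history windows before the rewarding pattern is followed reliably, whereas a constructivist learner would store and reuse the hierarchical sequence directly.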
Pages: 17-22 (6 pages)
Related Papers (50 records)
  • [1] Coevolutionary networks of reinforcement-learning agents
    Kianercy, Ardeshir
    Galstyan, Aram
    PHYSICAL REVIEW E, 2013, 88 (01):
  • [2] ASQ-IT: Interactive explanations for reinforcement-learning agents
    Amitai, Yotam
    Amir, Ofra
    Avni, Guy
    ARTIFICIAL INTELLIGENCE, 2024, 335
  • [3] BUILDING AN ARTIFICIAL STOCK MARKET POPULATED BY REINFORCEMENT-LEARNING AGENTS
    Rutkauskas, Aleksandras Vytautas
    Ramanauskas, Tomas
    JOURNAL OF BUSINESS ECONOMICS AND MANAGEMENT, 2009, 10 (04) : 329 - 341
  • [4] A Study on Reinforcement-learning Agents with Personality through the Implementation of Character Parameters
    Ando, Daichi
    Iwashita, Shino
    2020 JOINT 11TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS AND 21ST INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (SCIS-ISIS), 2020, : 410 - 413
  • [5] Developing the Reinforcement-Learning Child Agents for Measuring Play and Learning Performance in Kindergarten Design
    Lee, Jin
    Hong, Seung Wan
    ECAADE 2023 DIGITAL DESIGN RECONSIDERED, VOL 1, 2023, : 69 - 78
  • [6] A reinforcement-learning approach to efficient communication
    Kageback, Mikael
    Carlsson, Emil
    Dubhashi, Devdatt
    Sayeed, Asad
    PLOS ONE, 2020, 15 (07):
  • [7] A reinforcement-learning approach to color quantization
    Chou, CH
    Su, MC
    Chang, F
    Lai, E
    Proceedings of the Sixth IASTED International Conference on Intelligent Systems and Control, 2004, : 94 - 99
  • [8] A reinforcement-learning account of Tourette syndrome
    Maia, T.
    EUROPEAN PSYCHIATRY, 2017, 41 : S10 - S10
  • [9] A Reinforcement-Learning Approach to Color Quantization
    Chou, Chien-Hsing
    Su, Mu-Chun
    Zhao, Yu-Xiang
    Hsu, Fu-Hau
    JOURNAL OF APPLIED SCIENCE AND ENGINEERING, 2011, 14 (02): : 141 - 150
  • [10] Moderate confirmation bias enhances decision-making in groups of reinforcement-learning agents
    Bergerot, Clemence
    Barfuss, Wolfram
    Romanczuk, Pawel
    PLOS COMPUTATIONAL BIOLOGY, 2024, 20 (09)