Emergence of chemotactic strategies with multi-agent reinforcement learning

Cited by: 0
Authors
Tovey, Samuel [1 ]
Lohrmann, Christoph [1 ]
Holm, Christian [1 ]
Affiliations
[1] Univ Stuttgart, Inst Computat Phys, D-70569 Stuttgart, Germany
Source
MACHINE LEARNING-SCIENCE AND TECHNOLOGY | 2024 / Vol. 5 / Issue 03
Keywords
reinforcement learning; microrobotics; chemotaxis; active matter; biophysics; MOTION;
DOI
10.1088/2632-2153/ad5f73
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis; specifically, whether we can learn how intelligent agents process the information available to them in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine whether the physical constraints on biological swimmers, namely Brownian motion, lead to regimes in which RL training fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence with respect to agent size and swim speed. Finally, we examine the strategies adopted by the RL algorithm to explain how the agents perform their task. We identify three dominant emergent strategies and several rarer approaches. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms by which biological agents explore their environment and respond to changing conditions.
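As a concrete illustration of the kind of training loop the abstract describes, below is a minimal single-agent sketch, not the authors' multi-agent implementation: a point swimmer subject to Brownian rotational noise observes whether the local chemical concentration rose or fell over the last step and learns, via tabular Q-learning, whether to keep its heading or to tumble. The chemical field, all parameter values, and the two-action run-and-tumble action set are hypothetical choices made only for this example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only for this sketch (not from the paper).
DT = 0.1                            # integration time step
V0 = 1.0                            # active swim speed
D_ROT = 0.5                         # rotational diffusion coefficient (Brownian noise)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def concentration(pos):
    # Static chemical field peaking at the origin and decaying with distance.
    return np.exp(-np.dot(pos, pos) / 50.0)

# Tabular Q-learning. State: did the concentration increase over the last step?
# (0: decreasing, 1: increasing). Actions: 0 keep heading, 1 tumble.
Q = np.zeros((2, 2))

for episode in range(200):
    pos = rng.uniform(-10.0, 10.0, size=2)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c_prev = concentration(pos)
    state = 1
    for step in range(500):
        # Epsilon-greedy action selection.
        action = rng.integers(2) if rng.random() < EPS else int(np.argmax(Q[state]))
        if action == 1:
            theta = rng.uniform(0.0, 2.0 * np.pi)          # tumble: new random heading
        theta += np.sqrt(2.0 * D_ROT * DT) * rng.normal()  # Brownian rotational noise
        pos = pos + V0 * DT * np.array([np.cos(theta), np.sin(theta)])
        c_new = concentration(pos)
        reward = c_new - c_prev                            # reward: concentration gained
        new_state = int(c_new > c_prev)
        Q[state, action] += ALPHA * (reward + GAMMA * Q[new_state].max()
                                     - Q[state, action])
        state, c_prev = new_state, c_new

print("Learned Q-table (rows: state, cols: keep/tumble):")
print(Q)

After training, the greedy policy typically keeps its heading while the concentration rises and tumbles when it falls, the classic run-and-tumble behaviour of bacteria such as E. coli; the full multi-agent simulations described in the abstract discover several distinct strategies, of which this sketch is only a cartoon.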
Pages: 20
Related Papers
50 records in total
  • [31] Learning to Communicate with Deep Multi-Agent Reinforcement Learning
    Foerster, Jakob N.
    Assael, Yannis M.
    de Freitas, Nando
    Whiteson, Shimon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [32] Consensus Learning for Cooperative Multi-Agent Reinforcement Learning
    Xu, Zhiwei
    Zhang, Bin
    Li, Dapeng
    Zhang, Zeren
    Zhou, Guangchong
    Chen, Hao
    Fan, Guoliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 10, 2023 : 11726 - 11734
  • [33] Concept Learning for Interpretable Multi-Agent Reinforcement Learning
    Zabounidis, Renos
    Campbell, Joseph
    Stepputtis, Simon
    Hughes, Dana
    Sycara, Katia
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022 : 1828 - 1837
  • [34] Learning structured communication for multi-agent reinforcement learning
    Sheng, Junjie
    Wang, Xiangfeng
    Jin, Bo
    Yan, Junchi
    Li, Wenhao
    Chang, Tsung-Hui
    Wang, Jun
    Zha, Hongyuan
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2022, 36 (02)
  • [36] Generalized learning automata for multi-agent reinforcement learning
    De Hauwere, Yann-Michael
    Vrancx, Peter
    Nowe, Ann
    AI COMMUNICATIONS, 2010, 23 (04) : 311 - 324
  • [37] Learning multi-agent search strategies
    Strens, MJA
    ADAPTIVE AGENTS AND MULTI-AGENT SYSTEMS II: ADAPTATION AND MULTI-AGENT LEARNING, 2005, 3394 : 245 - 259
  • [38] Multi-agent reinforcement learning for character control
    Li, Cheng
    Fussell, Levi
    Komura, Taku
    VISUAL COMPUTER, 2021, 37 (12) : 3115 - 3123
  • [39] Parallel and distributed multi-agent reinforcement learning
    Kaya, M
    Arslan, A
    PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS, 2001 : 437 - 441
  • [40] Reinforcement learning of multi-agent communicative acts
    Hoet, S.
    Sabouret, N.
    Revue d'Intelligence Artificielle, 2010, 24 (02) : 159 - 188