Emergence of chemotactic strategies with multi-agent reinforcement learning

Cited by: 0
Authors
Tovey, Samuel [1 ]
Lohrmann, Christoph [1 ]
Holm, Christian [1 ]
Affiliations
[1] Univ Stuttgart, Inst Computat Phys, D-70569 Stuttgart, Germany
Source
MACHINE LEARNING: SCIENCE AND TECHNOLOGY, 2024, Vol. 5, Issue 3
Keywords
reinforcement learning; microrobotics; chemotaxis; active matter; biophysics; motion
DOI
10.1088/2632-2153/ad5f73
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis. Specifically, we ask whether we can learn how intelligent agents process the available information in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine whether the physical constraints on biological swimmers, namely Brownian motion, lead to regions where reinforcement learners' training fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before the active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence in agent size and swim speeds. Finally, we study the strategies adopted by the RL algorithm to explain how the agents perform their task. To this end, we identify three dominant emergent strategies and several rarer approaches. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms by which biological agents explore their environment and respond to changing conditions.
Pages: 20
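
The record above carries only metadata and the abstract; the paper's own simulation engine, agent models, and training code are not reproduced here. As a rough illustration of the kind of task the abstract describes, the following is a minimal sketch, in Python with NumPy only, of a hypothetical chemotaxis environment: an active Brownian particle senses a Gaussian chemical concentration, observes only whether the concentration rose or fell over the last step, and a tabular Q-learning policy chooses between "run" (keep heading) and "tumble" (randomize heading). All names, parameter values, and the choice of Q-learning are illustrative assumptions, not details taken from the paper.

# Minimal, hypothetical sketch of an RL chemotaxis task (not the paper's code).
# An active Brownian particle observes only the sign of the concentration change
# and learns, via tabular Q-learning, whether to "run" (keep heading) or
# "tumble" (randomize heading). All constants are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def concentration(pos):
    # Chemical field: radial Gaussian centred at the origin (the chemotactic target).
    return np.exp(-np.dot(pos, pos) / 50.0)

# Placeholder physical parameters (not taken from the paper).
V_SWIM = 0.5        # self-propulsion speed
D_T = 0.05          # translational diffusion (Brownian noise on position)
D_R = 0.05          # rotational diffusion (Brownian noise on heading)
DT = 1.0            # integration time step

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # Q-learning hyperparameters
N_EPISODES, N_STEPS = 300, 200

# States: 0 = concentration decreased, 1 = concentration increased.
# Actions: 0 = run (keep heading), 1 = tumble (pick a new random heading).
Q = np.zeros((2, 2))

def step(pos, theta, action):
    """Advance the particle by one time step with Brownian noise."""
    if action == 1:                      # tumble: new random orientation
        theta = rng.uniform(0.0, 2.0 * np.pi)
    theta += np.sqrt(2.0 * D_R * DT) * rng.normal()             # rotational noise
    pos = pos + V_SWIM * DT * np.array([np.cos(theta), np.sin(theta)])
    pos = pos + np.sqrt(2.0 * D_T * DT) * rng.normal(size=2)    # translational noise
    return pos, theta

for episode in range(N_EPISODES):
    pos = rng.uniform(-10.0, 10.0, size=2)           # random start away from target
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c_prev = concentration(pos)
    state = 1                                        # arbitrary initial observation
    for _ in range(N_STEPS):
        # epsilon-greedy action selection
        if rng.random() < EPS:
            action = int(rng.integers(2))
        else:
            action = int(np.argmax(Q[state]))
        pos, theta = step(pos, theta, action)
        c_new = concentration(pos)
        reward = c_new - c_prev                      # reward: increase in concentration
        next_state = int(c_new > c_prev)
        # tabular Q-learning update
        Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
        state, c_prev = next_state, c_new

print("Learned Q-table (rows: concentration fell/rose; cols: run/tumble):")
print(Q)

Under these assumptions, the learned table typically converges to a run-and-tumble rule (keep swimming while the concentration rises, reorient when it falls), which is one plausible chemotactic strategy of the general kind the abstract discusses; the paper's agents, observables, and learning algorithm are more involved.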