Emergence of chemotactic strategies with multi-agent reinforcement learning

Cited: 0
Authors
Tovey, Samuel [1 ]
Lohrmann, Christoph [1 ]
Holm, Christian [1 ]
Affiliations
[1] Univ Stuttgart, Inst Computat Phys, D-70569 Stuttgart, Germany
Source
MACHINE LEARNING-SCIENCE AND TECHNOLOGY | 2024, Vol. 5, Iss. 3
Keywords
reinforcement learning; microrobotics; chemotaxis; active matter; biophysics; MOTION;
DOI
10.1088/2632-2153/ad5f73
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning (RL) is a flexible and efficient method for programming micro-robots in complex environments. Here we investigate whether RL can provide insights into biological systems when trained to perform chemotaxis, i.e. whether we can learn how intelligent agents process the available information in order to swim towards a target. We run simulations covering a range of agent shapes, sizes, and swim speeds to determine whether the physical constraints on biological swimmers, namely Brownian motion, lead to regions where the training of reinforcement learners fails. We find that the RL agents can perform chemotaxis as soon as it is physically possible and, in some cases, even before active swimming overpowers the stochastic environment. We study the efficiency of the emergent policy and identify convergence in agent size and swim speed. Finally, we study the strategy adopted by the RL algorithm to explain how the agents perform their tasks. To this end, we identify three dominant emergent strategies and several rarer approaches. These strategies, whilst producing almost identical trajectories in simulation, are distinct and give insight into the possible mechanisms by which biological agents explore their environment and respond to changing conditions.
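The abstract's central physical tension, active swimming competing with Brownian motion, can be illustrated with a minimal overdamped-swimmer sketch. This is not the paper's RL setup: the steering rule below is a hand-coded gradient-following "policy" standing in for a learned one, and all parameter values (`v`, `D`, `dt`) are illustrative assumptions.

```python
import numpy as np

def simulate_chemotaxis(v=2.0, D=0.1, dt=0.01, steps=5000, seed=0):
    """Overdamped 2D swimmer with a hand-coded chemotactic policy.

    The swimmer steers its heading up the gradient of a concentration
    field peaked at the origin (equivalently, down the distance gradient),
    while translational Brownian noise of strength D perturbs the position.
    Returns the final distance to the source.
    """
    rng = np.random.default_rng(seed)
    pos = np.array([5.0, 5.0])  # start away from the source at the origin
    for _ in range(steps):
        # "Policy": point the swim direction toward the source.
        heading = -pos / (np.linalg.norm(pos) + 1e-12)
        # Brownian kick: variance 2*D*dt per coordinate.
        noise = rng.standard_normal(2) * np.sqrt(2.0 * D * dt)
        pos = pos + v * heading * dt + noise
    return float(np.linalg.norm(pos))
```

When active swimming dominates the noise (large `v/D`), the swimmer reaches and holds station near the source; with `v = 0` it merely diffuses, which is the regime where, per the abstract, one would expect training to become difficult.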
Pages: 20