Marcel Gerhard, Ashreya Jayaram, Andreas Fischer, Thomas Speck
Institut für Physik, Johannes Gutenberg-Universität Mainz, Staudingerweg 7-9, 55128 Mainz, Germany.
Phys Rev E. 2021 Nov;104(5-1):054614. doi: 10.1103/PhysRevE.104.054614.
We numerically study active Brownian particles that can respond to environmental cues through a small set of actions (switching their motility and turning left or right with respect to some direction) which are motivated by recent experiments with colloidal self-propelled Janus particles. We employ reinforcement learning to find optimal mappings between the state of particles and these actions. Specifically, we first consider a predator-prey situation in which prey particles try to avoid a predator. Using as reward the squared distance from the predator, we discuss the merits of three state-action sets and show that turning away from the predator is the most successful strategy. We then remove the predator and employ as collective reward the local concentration of signaling molecules exuded by all particles and show that aligning with the concentration gradient leads to chemotactic collapse into a single cluster. Our results illustrate a promising route to obtain local interaction rules and design collective states in active matter.
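The predator-prey setup described above can be illustrated with a minimal sketch: a single prey particle learns, via tabular Q-learning, which turning action to take given the discretized bearing of the predator, with the squared distance from the predator as reward. All parameter values, the state discretization, and the learning algorithm details here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 8             # discretized bearing of predator relative to prey heading
ACTIONS = (-1, 0, 1)     # turn left, keep heading, turn right
DTHETA = 2 * np.pi / 16  # turning increment per step (assumed)
SPEED_PREY, SPEED_PRED = 1.0, 0.5
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # illustrative Q-learning hyperparameters

def bearing_state(prey_pos, prey_theta, pred_pos):
    """Discretize the angle of the predator as seen from the prey."""
    d = pred_pos - prey_pos
    phi = np.arctan2(d[1], d[0]) - prey_theta
    return int(((phi % (2 * np.pi)) / (2 * np.pi)) * N_STATES) % N_STATES

def episode(Q, steps=200, train=True):
    """Run one pursuit episode; update Q in place when training."""
    prey, theta = np.zeros(2), rng.uniform(0, 2 * np.pi)
    pred = rng.normal(0.0, 5.0, size=2)
    for _ in range(steps):
        s = bearing_state(prey, theta, pred)
        if train and rng.random() < EPS:
            a = int(rng.integers(len(ACTIONS)))   # explore
        else:
            a = int(np.argmax(Q[s]))              # exploit learned policy
        # prey: turn, then self-propel with small rotational noise
        theta += ACTIONS[a] * DTHETA + 0.05 * rng.normal()
        prey = prey + SPEED_PREY * np.array([np.cos(theta), np.sin(theta)])
        # predator: move straight toward the prey
        d = prey - pred
        pred = pred + SPEED_PRED * d / (np.linalg.norm(d) + 1e-9)
        # reward: squared distance from the predator (to be maximized)
        r = float(np.sum((prey - pred) ** 2))
        s2 = bearing_state(prey, theta, pred)
        if train:
            Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
    return float(np.linalg.norm(prey - pred))

Q = np.zeros((N_STATES, len(ACTIONS)))
for _ in range(300):
    episode(Q)
final_dist = episode(Q, train=False)
```

With the prey faster than the predator, the learned table tends toward turning away from the predator's bearing, consistent with the strategy the paper identifies as most successful.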