School of Mathematics, University of Leeds, Leeds, UK.
J R Soc Interface. 2023 Jul;20(204):20230127. doi: 10.1098/rsif.2023.0127. Epub 2023 Jul 26.
Decision-making and movement, whether of single animals or groups of animals, are often treated and investigated as separate processes. However, many decisions are made while moving through a given space. In other words, both processes are optimized at the same time, and optimal decision-making can only be understood in the light of movement constraints. To fully understand the rationale of decisions embedded in an environment (and therefore the underlying evolutionary processes), it is essential to develop theories of spatial decision-making. Here, we present a framework developed specifically to address this issue by means of artificial neural networks and genetic algorithms. Specifically, we investigate a simple task in which single agents must learn to explore a square arena without leaving its boundaries. We show that agents evolve increasingly optimal strategies for solving a spatially embedded learning task without starting from an arbitrary model of movement. This process allows the agents to learn how to move (i.e. by avoiding the arena walls) in order to make increasingly optimal decisions (improving their exploration of the arena). Ultimately, the framework predicts possibly optimal behavioural strategies for tasks that combine learning and movement.
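The general approach the abstract describes — evolving neural-network agents with a genetic algorithm to explore a square arena without crossing its walls — can be illustrated with a minimal neuroevolution sketch. This is an assumption-laden toy, not the authors' implementation: the network size, fitness definition (number of distinct grid cells visited before hitting a wall), and truncation-selection scheme are all illustrative choices.

```python
import math
import random

# Illustrative neuroevolution sketch (not the paper's actual model).
# A tiny feedforward network maps the agent's (x, y) position to a heading
# angle; fitness counts distinct grid cells visited, and stepping outside
# the unit-square arena ends the episode (the "wall" constraint).

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN  # input->hidden weights plus hidden->output weights

def policy(weights, x, y):
    """One hidden tanh layer; the output is the movement heading (radians)."""
    hidden = [math.tanh(weights[2 * i] * x + weights[2 * i + 1] * y)
              for i in range(N_HIDDEN)]
    return sum(weights[2 * N_HIDDEN + i] * hidden[i] for i in range(N_HIDDEN))

def fitness(weights, steps=200, step_len=0.05, grid=10):
    """Arena coverage: number of distinct grid cells visited before a wall hit."""
    x, y = 0.5, 0.5
    visited = {(int(x * grid), int(y * grid))}
    for _ in range(steps):
        angle = policy(weights, x, y)
        x += step_len * math.cos(angle)
        y += step_len * math.sin(angle)
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            break  # leaving the arena terminates the episode
        visited.add((int(x * grid), int(y * grid)))
    return len(visited)

def evolve(pop_size=30, generations=40, sigma=0.3, seed=0):
    """Simple genetic algorithm: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.gauss(0, 1) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 5]  # keep the best fifth unchanged
        pop = elite + [
            [w + rng.gauss(0, sigma) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("coverage of best evolved agent:", fitness(best))
```

Because the elite is carried over unchanged each generation, the best fitness in the population is non-decreasing, mirroring the abstract's observation that agents develop increasingly optimal exploration strategies over evolutionary time.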