Schulte Patrick, Zeil Jochen, Stürzl Wolfgang
Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Wessling, Germany.
Research School of Biology, The Australian National University, Canberra, Australia.
Biol Cybern. 2019 Aug;113(4):439-451. doi: 10.1007/s00422-019-00800-1. Epub 2019 May 10.
Wasps and bees perform learning flights when leaving their nest or food locations for the first time, during which they acquire visual information that enables them to return successfully. Here we present and test a set of simple control rules underlying the execution of learning flights that closely mimic those performed by ground-nesting wasps. In the simplest model, we assume that the angle between the flight direction and the nest direction as seen from the position of the insect is constant and only flips sign when the pivoting direction around the nest is changed, resulting in a concatenation of piecewise defined logarithmic spirals. We then added characteristic properties of real learning flights, such as head saccades and the condition that the retinal position of the nest entrance is kept nearly constant, to describe the development of a learning flight in a head-centered frame of reference, assuming that the retinal position of the nest is known. We finally implemented a closed-loop simulation of learning flights based on a small set of visual control rules. The visual input for this model consists of rendered views generated from 3D reconstructions of natural wasp nesting sites, and the retinal nest position is controlled by means of simple template-based tracking. We show that naturalistic paths can be generated without knowledge of the absolute distance to the nest or of the flight speed. We demonstrate in addition that nest-tagged views recorded during such simulated learning flights are sufficient for a homing agent to pinpoint the goal, by identifying the nest direction when encountering familiar views. We discuss how the information acquired during learning flights close to the nest can be integrated with long-range homing models.
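The simplest model described above — a constant angle between the flight direction and the nest direction, with the sign flipping when the pivoting direction reverses — can be sketched as a minimal open-loop simulation. All parameter values here (the bearing angle `gamma_deg`, flip interval, step length, start position) are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def learning_flight(gamma_deg=135.0, flip_every=60, steps=360,
                    step_len=0.01, start=(0.2, 0.0)):
    """Generate a piecewise logarithmic-spiral path around a nest at the
    origin: the angle between the flight direction and the direction to
    the nest is held constant at gamma; its sign flips every `flip_every`
    steps, modelling a reversal of the pivoting direction."""
    gamma = np.radians(gamma_deg)
    p = np.array(start, dtype=float)
    sign = 1.0
    path = [p.copy()]
    for i in range(1, steps + 1):
        to_nest = -p / np.linalg.norm(p)      # unit vector toward the nest
        a = sign * gamma
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        d = rot @ to_nest                     # flight direction at constant bearing
        p = p + step_len * d
        path.append(p.copy())
        if i % flip_every == 0:
            sign = -sign                      # reverse pivoting direction
    return np.array(path)

path = learning_flight()
r = np.linalg.norm(path, axis=1)
# with gamma > 90 deg the path spirals outward: r grows monotonically
```

Within each segment of constant sign this rule traces a logarithmic spiral, since the radial growth rate per unit arc length depends only on the fixed bearing angle; the sign flips concatenate mirrored spiral arcs, producing the characteristic back-and-forth arcing of learning flights. Note that neither the absolute distance to the nest nor the flight speed enters the control rule, matching the claim in the abstract.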