Aix-Marseille University, CNRS, ISM, Marseille, France.
CRIStAL Laboratory, CNRS UMR 9189, University of Lille, 59650 Lille, France.
J R Soc Interface. 2021 Sep;18(182):20210567. doi: 10.1098/rsif.2021.0567. Epub 2021 Sep 8.
Honeybees foraging and recruiting nest-mates by performing the waggle dance need to be able to gauge the flight distance to the food source regardless of the wind and terrain conditions. Previous authors have hypothesized that the foragers' visual odometer mathematically integrates the angular velocity of the ground image sweeping backward across their ventral viewfield, known as translational optic flow. The question arises as to how mathematical integration of optic flow (usually expressed in rad/s) can reliably encode distances regardless of the height and speed of flight. The vertical self-oscillatory movements observed in honeybees trigger expansions and contractions of the optic flow vector field, yielding an additional visual cue called optic flow divergence. We have developed a self-scaled model for the visual odometer in which the translational optic flow is scaled by the visually estimated current clearance from the ground. In simulation, this model, which we have called SOFIa, was found to be reliable over a large range of flight trajectories, terrains and wind conditions. It reduced the statistical dispersion of the estimated flight distances approximately 10-fold in comparison with the mathematically integrated raw optic flow model. The SOFIa model can be directly implemented in robotic applications based on minimalistic visual equipment.
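The self-scaling principle described in the abstract can be illustrated with a short numerical sketch. The snippet below is not the authors' SOFIa implementation; it simply contrasts integrating the raw translational optic flow (result in radians) with integrating the same flow scaled by an estimated ground clearance (result in metres). The divergence-based clearance estimator, the toy oscillating trajectory and all parameter values are assumptions introduced purely for illustration.

```python
import numpy as np

# Illustrative sketch only (not the published SOFIa model): compare a raw
# optic-flow odometer with a self-scaled one in which the translational
# optic flow is multiplied by an estimated ground clearance before being
# integrated over time.

def raw_odometer(w, dt):
    """Time integral of the raw translational optic flow w (result in rad)."""
    return np.sum(w) * dt

def self_scaled_odometer(w, div, v_z, dt, eps=1e-3):
    """Time integral of w scaled by a divergence-based clearance estimate (result in m).

    Hypothetical estimator: assuming the optic-flow divergence is roughly
    div = v_z / h during vertical self-oscillations, the clearance is h = v_z / div.
    Samples where the divergence is too small are filled with the median estimate.
    """
    safe_div = np.where(np.abs(div) > eps, div, np.nan)   # mask near-zero divergence
    h_est = v_z / safe_div                                # clearance estimate (m)
    h_est = np.where(np.isnan(h_est), np.nanmedian(h_est), h_est)
    return np.sum(h_est * w) * dt

# Toy trajectory: constant ground speed with a sinusoidally oscillating height.
dt, T = 0.01, 60.0
t = np.arange(0.0, T, dt)
v = 5.0                                        # ground speed (m/s)
h = 3.0 + 1.0 * np.sin(2.0 * np.pi * 0.5 * t)  # ground clearance (m)
v_z = np.gradient(h, dt)                       # vertical speed (m/s)
w = v / h                                      # translational optic flow (rad/s)
div = v_z / h                                  # optic-flow divergence (1/s)

print(f"true distance flown : {v * T:.1f} m")
print(f"raw OF integral     : {raw_odometer(w, dt):.1f} rad (height-dependent)")
print(f"self-scaled estimate: {self_scaled_odometer(w, div, v_z, dt):.1f} m")
```

Because the translational optic flow is ground speed divided by clearance, rescaling it by an estimate of the current clearance restores ground speed, so its time integral approximates the distance travelled independently of flight height; the raw integral, by contrast, varies with the height profile of the toy trajectory.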