Oliver W Layton, Nathaniel Powell, Scott T Steinmetz, Brett R Fajen
Department of Computer Science, Colby College, Waterville, ME, United States of America.
Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States of America.
Bioinspir Biomim. 2022 Jun 9;17(4). doi: 10.1088/1748-3190/ac709b.
Optic flow provides rich information about world-relative self-motion and is used by many animals to guide movement. For example, self-motion along linear, straight paths without eye movements generates optic flow that radiates from a singularity that specifies the direction of travel (heading). Many neural models of optic flow processing contain heading detectors tuned to the position of this singularity, a design informed by brain area MSTd of primate visual cortex, which has been linked to heading perception. Such biologically inspired models could be useful for efficient self-motion estimation in robots, but existing systems are tailored to the limited scenario of linear self-motion and neglect sensitivity to self-motion along more natural curvilinear paths. The observer in this case experiences more complex motion patterns, the appearance of which depends on the radius of the curved path (path curvature) and the direction of gaze. Indeed, MSTd neurons have been shown to exhibit tuning to optic flow patterns other than radial expansion, a property that is rarely captured in neural models. We investigated in a computational model whether a population of MSTd-like sensors tuned to radial, spiral, ground, and other optic flow patterns could support the accurate estimation of parameters describing both linear and curvilinear self-motion. We used deep learning to decode self-motion parameters from the signals produced by the diverse population of MSTd-like units. We demonstrate that this system is capable of accurately estimating curvilinear path curvature, clockwise/counterclockwise sign, and gaze direction relative to the path tangent in both synthetic and naturalistic videos of simulated self-motion. Estimates remained stable over time while rapidly adapting to dynamic changes in the observer's curvilinear self-motion.
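The radial flow pattern described above can be illustrated with the standard pinhole-camera flow equations (a textbook sketch, not the paper's model): under pure translation, the flow at an image point depends on scene depth and vanishes at the focus of expansion, the singularity that specifies heading.

```python
import numpy as np

# Sketch of instantaneous optic flow under pure observer translation
# T = (Tx, Ty, Tz). For a scene point at depth Z projecting to image
# point (x, y) with focal length f, the translational flow is
#   u = (x*Tz - f*Tx) / Z,   v = (y*Tz - f*Ty) / Z.
# Flow is zero at the focus of expansion (f*Tx/Tz, f*Ty/Tz): the heading.

def translational_flow(x, y, Z, T, f=1.0):
    Tx, Ty, Tz = T
    u = (x * Tz - f * Tx) / Z
    v = (y * Tz - f * Ty) / Z
    return u, v

# Heading 10 degrees to the right of the optical axis
T = (np.tan(np.radians(10.0)), 0.0, 1.0)
foe_x, foe_y = T[0] / T[2], T[1] / T[2]  # focus of expansion (f = 1)

# Flow vanishes at the singularity and radiates outward elsewhere
u0, v0 = translational_flow(foe_x, foe_y, Z=5.0, T=T)
u1, v1 = translational_flow(foe_x + 0.2, foe_y, Z=5.0, T=T)
print(abs(u0) < 1e-12 and abs(v0) < 1e-12)  # True: no motion at the FoE
print(u1 > 0)                               # True: flow points away from it
```

Note that this translational flow scales with inverse depth, which is why nearer surfaces (e.g., the ground plane) produce faster image motion than distant ones.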
Our results show that systems coupling biologically inspired and artificial neural networks hold promise as a solution for robust vision-based self-motion estimation in robots.
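A minimal, hypothetical sketch of the template-matching idea behind MSTd-like units (not the paper's exact sensors): each unit stores a preferred flow pattern, and its response is the mean cosine similarity between the input flow field and that template. Spiral templates interpolate between radial expansion (angle 0) and pure rotation (angle π/2), so the unit whose spiral angle matches the stimulus responds most strongly.

```python
import numpy as np

def spiral_field(angle, grid):
    """Unit flow vectors of a spiral pattern centered on the grid origin."""
    x, y = grid
    r = np.stack([x, y])    # radial (expansion) component
    t = np.stack([-y, x])   # rotational component
    v = np.cos(angle) * r + np.sin(angle) * t
    norm = np.linalg.norm(v, axis=0) + 1e-9
    return v / norm

def unit_response(template, flow):
    """Mean cosine similarity between a unit's template and the input flow."""
    return np.mean(np.sum(template * flow, axis=0))

xs = np.linspace(-1.0, 1.0, 21)
grid = np.meshgrid(xs, xs)

# Population tuned to spiral angles from pure expansion to pure rotation
angles = np.linspace(0.0, np.pi / 2, 7)
templates = [spiral_field(a, grid) for a in angles]

# Input flow with an intermediate spiral angle; the matching unit wins
stimulus = spiral_field(np.pi / 4, grid)
responses = [unit_response(t, stimulus) for t in templates]
best = angles[int(np.argmax(responses))]
print(np.isclose(best, np.pi / 4))  # True
```

In the paper's system, a learned decoder (a deep network) reads self-motion parameters such as path curvature and gaze direction from the responses of a much more diverse population of such units; the linear-readout-free matching step here only illustrates the front-end tuning.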