Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
Nat Neurosci. 2024 Nov;27(11):2207-2217. doi: 10.1038/s41593-024-01766-5. Epub 2024 Oct 3.
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
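The abstract's central contrast — a small network can hold a continuous heading estimate only if its connectivity is finely tuned, while mistuning pushes the stored value toward a few discrete settings — can be explored with a toy simulation. The sketch below is not the authors' model; it is a generic small ring network with cosine recurrent weights, and every parameter value is an illustrative assumption. It relaxes an activity bump from many initial headings with no angular input, decodes the remembered heading with a population vector, and reports the mean drift and a rough count of distinct resting headings, once for the symmetric weights and once for a slightly jittered copy.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                                         # a small head-direction-like network
theta = 2 * np.pi * np.arange(N) / N          # preferred headings of the N neurons
J0, J1 = -2.0, 4.0                            # uniform inhibition + cosine-tuned excitation (illustrative values)
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def settle(W, angle0, steps=4000, dt=0.01, tau=0.1, drive=0.5):
    """Relax a bump of activity centred on angle0 with no angular input,
    then decode the remembered heading with a population vector."""
    r = np.clip(np.cos(theta - angle0), 0.0, 1.0)              # initial bump of activity
    for _ in range(steps):
        r += (dt / tau) * (-r + np.clip(W @ r + drive, 0.0, 1.0))
    return np.angle(r @ np.exp(1j * theta))                    # decoded heading in radians

def drift_stats(W, n_init=36):
    """Mean absolute drift of the decoded heading and a rough count
    of distinct resting headings after the relaxation period."""
    starts = np.linspace(-np.pi, np.pi, n_init, endpoint=False)
    finals = np.array([settle(W, a) for a in starts])
    drift = np.abs((finals - starts + np.pi) % (2 * np.pi) - np.pi)
    return drift.mean(), len(np.unique(np.round(finals, 2)))

# A slightly mistuned copy of the same network (a few percent weight jitter).
W_jittered = W + 0.3 * rng.standard_normal((N, N)) / N

for label, weights in [("symmetric", W), ("jittered", W_jittered)]:
    mean_drift, n_rest = drift_stats(weights)
    print(f"{label:9s}  mean |drift| = {mean_drift:.3f} rad,"
          f"  distinct resting headings ~ {n_rest}")
```

Comparing the two printed lines gives a feel for the trade-off described in the abstract: how much the remembered heading drifts, and toward how few discrete values it collapses, when the weights of a very small network deviate from their tuned, symmetric form.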