Gorkem Secer, James J. Knierim, Noah J. Cowan
Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA.
Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA.
Res Sq. 2024 Apr 15:rs.3.rs-4209280. doi: 10.21203/rs.3.rs-4209280/v1.
Representations of continuous variables are crucial for creating internal models of the external world. Continuous bump attractor networks (CBANs) are a prevailing model of how the brain maintains such representations across a broad range of functions and brain areas, including spatial navigation in hippocampal/entorhinal circuits and working memory in prefrontal cortex. Through recurrent connections, a CBAN maintains a persistent activity bump whose peak location can vary along a neural space, corresponding to different values of a continuous variable. To track a continuous variable that changes over time, a CBAN updates the location of its activity bump based on inputs that encode changes in the variable (e.g., movement velocity in the case of spatial navigation), a process akin to mathematical integration. This integration is not perfect and accumulates error over time. For error correction, CBANs can use additional inputs that provide ground-truth information about the variable's correct value (e.g., visual landmarks for spatial navigation); these inputs enable the network dynamics to correct representation errors automatically. Recent experimental work on hippocampal place cells has shown that, beyond correcting errors, ground-truth inputs also fine-tune the gain of the integration process, the crucial factor linking changes in the continuous variable to updates of the activity bump's location. However, existing CBAN models lack this plasticity and therefore offer no insight into the neural mechanisms and representations involved in recalibrating the integration gain. In this paper, we address this gap by using a ring attractor network, a specific type of CBAN, to model the experimental conditions that demonstrated gain recalibration in hippocampal place cells. Our analysis reveals the conditions that any neural mechanism for gain recalibration within a CBAN must satisfy. Unlike error correction, which occurs through network dynamics driven by ground-truth inputs, gain recalibration requires an additional neural signal that explicitly encodes the error in the network's representation via a rate code. Finally, we propose a modified ring attractor network as an example CBAN model that verifies our theoretical findings. By combining an error-rate code with Hebbian synaptic plasticity, this model recalibrates the integration gain of a CBAN, ensuring accurate representation of continuous variables.
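The abstract's core loop (velocity integration through a miscalibrated gain, landmark-based error correction, and gain recalibration driven by an explicit error-rate signal) can be sketched schematically. This is a minimal illustration, not the paper's model: the bump's decoded position stands in for full ring attractor dynamics, and all parameter values (gain, learning rate, correction strength) are illustrative assumptions.

```python
import numpy as np

# Schematic sketch of the abstract's logic: path integration with a
# miscalibrated gain, landmark-based error correction, and gain
# recalibration driven by an explicit error signal. The decoded bump
# position stands in for full recurrent ring attractor dynamics; all
# parameters are illustrative assumptions, not the paper's values.

dt = 0.01            # time step
gain = 1.5           # miscalibrated integration gain (true gain = 1.0)
eta = 0.05           # learning rate for gain recalibration
k_corr = 0.2         # strength of landmark-based error correction

pos_hat = 0.0        # represented (bump) position on the ring, radians
pos_true = 0.0       # ground-truth position (e.g., given by landmarks)

for step in range(2000):
    v = 1.0                                            # movement velocity
    pos_true = (pos_true + v * dt) % (2 * np.pi)       # actual position
    pos_hat = (pos_hat + gain * v * dt) % (2 * np.pi)  # integrated estimate

    # Error-rate signal: wrapped difference between estimate and truth.
    err = np.angle(np.exp(1j * (pos_hat - pos_true)))

    pos_hat = (pos_hat - k_corr * err) % (2 * np.pi)   # error correction
    gain -= eta * err * v                              # Hebbian-like gain update

print(f"recalibrated gain: {gain:.3f}")
```

Under these assumptions, error correction alone keeps the bump near the true position but leaves the gain wrong, while the multiplicative update of `gain` by the error signal drives it from 1.5 toward the true value of 1, mirroring the recalibration the abstract attributes to an error-rate code combined with Hebbian plasticity.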