
Stochastic neural fields as gradient dynamical systems.

Affiliation

Department of Mathematics, University of Utah, Salt Lake City, Utah 84112, USA.

Publication

Phys Rev E. 2019 Jul;100(1-1):012402. doi: 10.1103/PhysRevE.100.012402.

Abstract

Continuous attractor neural networks are used extensively to model a variety of experimentally observed coherent brain states, ranging from cortical waves of activity to stationary activity bumps. The latter are thought to play an important role in various forms of neural information processing, including population coding in primary visual cortex (V1) and working memory in prefrontal cortex. However, one limitation of continuous attractor networks is that the location of the peak of an activity bump (or wave) can diffuse due to intrinsic network noise. This reflects marginal stability of bump solutions with respect to the action of an underlying continuous symmetry group. Previous studies have used perturbation theory to derive an approximate stochastic differential equation for the location of the peak (phase) of the bump. Although this method captures the diffusive wandering of a bump solution, it ignores fluctuations in the amplitude of the bump. In this paper, we show how amplitude fluctuations can be analyzed by reducing the underlying stochastic neural field equation to a finite-dimensional stochastic gradient dynamical system that tracks the stochastic motion of both the amplitude and phase of bump solutions. This allows us to derive exact expressions for the steady-state probability density and its moments, which are then used to investigate two major issues: (i) the input-dependent suppression of neural variability and (ii) noise-induced transitions to bump extinction. We develop the theory by considering the particular example of a ring attractor network with SO(2) symmetry, which is the most common architecture used in attractor models of working memory and population tuning in V1. However, we also extend the analysis to a higher-dimensional spherical attractor network with SO(3) symmetry which has previously been proposed as a model of orientation and spatial frequency tuning in V1. 
We thus establish how a combination of stochastic analysis and group theoretic methods provides a powerful tool for investigating the effects of noise in continuous attractor networks.

