A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

Author information

Durstewitz Daniel

Affiliation

Dept. of Theoretical Neuroscience, Bernstein Center for Computational Neuroscience Heidelberg-Mannheim, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany.

Publication information

PLoS Comput Biol. 2017 Jun 2;13(6):e1005542. doi: 10.1371/journal.pcbi.1005542. eCollection 2017 Jun.

Abstract

The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable the recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and link these directly to computational properties.
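
To make the kind of generative model described above concrete, the sketch below simulates data from one common PLRNN-style state space parameterization: latent states evolve via a diagonal linear term plus off-diagonal coupling through a piecewise-linear (ReLU) nonlinearity with Gaussian process noise, and observations are a noisy linear readout of the rectified latent states. This is only an illustrative sketch under stated assumptions, not the paper's exact parameterization; all dimensions, parameter values, and the particular form of the readout are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative): M latent states, N observed units, T time steps.
M, N, T = 3, 10, 200

# --- PLRNN latent-dynamics parameters (hypothetical values) ---
A = np.diag(rng.uniform(0.5, 0.9, M))      # diagonal autoregressive weights
W = 0.1 * rng.standard_normal((M, M))      # off-diagonal coupling through the nonlinearity
np.fill_diagonal(W, 0.0)
h = rng.standard_normal(M)                 # activation thresholds
Sigma = 0.01 * np.eye(M)                   # process-noise covariance

# --- Linear-Gaussian observation model parameters ---
B = rng.standard_normal((N, M))            # factor loadings mapping latents to observed units
Gamma = 0.1 * np.eye(N)                    # observation-noise covariance

def relu(z):
    """Piecewise-linear (ReLU) activation; the source of the piecewise-linear dynamics."""
    return np.maximum(z, 0.0)

# Simulate one trial from the generative model:
#   z_t = A z_{t-1} + W relu(z_{t-1} - h) + eps_t,  eps_t ~ N(0, Sigma)
#   x_t = B relu(z_t) + eta_t,                      eta_t ~ N(0, Gamma)
Z = np.zeros((T, M))   # latent trajectory
X = np.zeros((T, N))   # observed time series
z = rng.standard_normal(M)
for t in range(T):
    z = A @ z + W @ relu(z - h) + rng.multivariate_normal(np.zeros(M), Sigma)
    Z[t] = z
    X[t] = B @ relu(z) + rng.multivariate_normal(np.zeros(N), Gamma)
```

In the paper, estimation runs in the opposite direction: given observed time series X, the Expectation-Maximization algorithm alternates between inferring the latent state posterior (via a global Laplace approximation) and updating the PLRNN parameters. The sketch above only illustrates the forward (generative) model that such an estimation scheme would fit.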

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3fc4/5456035/95a1fdd0d7cb/pcbi.1005542.g001.jpg
