

Point process models for sequence detection in high-dimensional neural spike trains.

Authors

Williams Alex H, Degleris Anthony, Wang Yixin, Linderman Scott W

Affiliations

Department of Statistics, Stanford University, Stanford, CA 94305.

Department of Electrical Engineering, Stanford University, Stanford, CA 94305.

Publication

Adv Neural Inf Process Syst. 2020 Dec;33:14350-14361.

Abstract

Sparse sequences of neural spikes are posited to underlie aspects of working memory [1], motor production [2], and learning [3, 4]. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience [5-7]. Promising recent work [4, 8] utilized a convolutive nonnegative matrix factorization model [9] to tackle this challenge. However, this model requires spike times to be discretized, utilizes a sub-optimal least-squares criterion, and does not provide uncertainty estimates for model predictions or estimated parameters. We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time. This ultra-sparse representation of sequence events opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, which have been experimentally observed in neural circuits [10]. We demonstrate these advantages on experimental recordings from songbird higher vocal center and rodent hippocampus.
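The core idea of the abstract can be illustrated with a small sketch: each latent sequence event is a marked point in continuous time, and each neuron's firing rate is a baseline plus Gaussian bumps whose per-neuron delays are stretched by a learnable time-warp factor. The parameterization and names below are illustrative assumptions for exposition, not the authors' exact model.

```python
import numpy as np

# Sketch of a sequence-event point process intensity. Each latent event k
# has a start time tau[k], an amplitude amp[k], and a warp factor warp[k]
# that stretches or compresses the sequence in time. Neuron n's response
# to event k peaks at tau[k] + warp[k] * delays[n].

def intensity(t, baseline, delays, widths, tau, amp, warp):
    """Firing rate of every neuron at a scalar time t (spikes/sec)."""
    rate = baseline.copy()                        # (n_neurons,) background
    for k in range(len(tau)):
        peak = tau[k] + warp[k] * delays          # per-neuron peak times
        rate += amp[k] * np.exp(-0.5 * ((t - peak) / widths) ** 2)
    return rate

n_neurons = 5
baseline = np.full(n_neurons, 0.1)                # background rate
delays = np.linspace(0.0, 0.4, n_neurons)         # within-sequence offsets
widths = np.full(n_neurons, 0.02)                 # bump widths (sec)
tau = np.array([1.0, 2.5])                        # two event start times
amp = np.array([20.0, 20.0])                      # event amplitudes
warp = np.array([1.0, 2.0])                       # second event runs 2x slower

# At t = 1.0 the first neuron (zero delay) sits at the peak of event 1,
# while all other neurons remain near their baseline rate.
print(intensity(1.0, baseline, delays, widths, tau, amp, warp).round(2))
```

Because each sequence occurrence is summarized by just a few continuous-valued marks (time, amplitude, warp), the representation stays sparse regardless of how finely spike times are resolved, which is the advantage the abstract highlights over discretized convolutive NMF.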


Similar Articles

Estimating summary statistics in the spike-train space. J Comput Neurosci. 2013 Jun;34(3):391-410. doi: 10.1007/s10827-012-0427-3. Epub 2012 Oct 5.

Learning probabilistic neural representations with randomly connected circuits. Proc Natl Acad Sci U S A. 2020 Oct 6;117(40):25066-25073. doi: 10.1073/pnas.1912804117. Epub 2020 Sep 18.

Cited By

Interpretable deep learning for deconvolutional analysis of neural signals. Neuron. 2025 Apr 16;113(8):1151-1168.e13. doi: 10.1016/j.neuron.2025.02.006. Epub 2025 Mar 12.

How our understanding of memory replay evolves. J Neurophysiol. 2023 Mar 1;129(3):552-580. doi: 10.1152/jn.00454.2022. Epub 2023 Feb 8.

Neural ensembles in navigation: From single cells to population codes. Curr Opin Neurobiol. 2023 Feb;78:102665. doi: 10.1016/j.conb.2022.102665. Epub 2022 Dec 19.

References

1. On the methods for reactivation and replay analysis. Philos Trans R Soc Lond B Biol Sci. 2020 May 25;375(1799):20190231. doi: 10.1098/rstb.2019.0231. Epub 2020 Apr 6.

3. High-dimensional geometry of population responses in visual cortex. Nature. 2019 Jul;571(7765):361-365. doi: 10.1038/s41586-019-1346-5. Epub 2019 Jun 26.

4. Sequential Neural Activity in Primary Motor Cortex during Sleep. J Neurosci. 2019 May 8;39(19):3698-3712. doi: 10.1523/JNEUROSCI.1408-18.2019. Epub 2019 Mar 6.

7. Space and Time: The Hippocampus as a Sequence Generator. Trends Cogn Sci. 2018 Oct;22(10):853-869. doi: 10.1016/j.tics.2018.07.006.

8. Mixture models with a prior on the number of components. J Am Stat Assoc. 2018;113(521):340-356. doi: 10.1080/01621459.2016.1255636. Epub 2017 Nov 13.
