
Probabilistic neural transfer function estimation with Bayesian system identification.

Affiliations

Department of Computer Science, Saarland University, Saarbrücken, Germany.

Institute for Ophthalmic Research and Centre for Integrative Neuroscience (CIN), Tübingen University, Tübingen, Germany.

Publication info

PLoS Comput Biol. 2024 Jul 31;20(7):e1012354. doi: 10.1371/journal.pcbi.1012354. eCollection 2024 Jul.

DOI: 10.1371/journal.pcbi.1012354
PMID: 39083559
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11318871/
Abstract

Neural population responses in sensory systems are driven by external physical stimuli. This stimulus-response relationship is typically characterized by receptive fields, which have been estimated by neural system identification approaches. Such models usually require a large amount of training data, yet the recording time for animal experiments is limited, giving rise to epistemic uncertainty for the learned neural transfer functions. While deep neural network models have demonstrated excellent power on neural prediction, they usually do not provide the uncertainty of the resulting neural representations and derived statistics, such as most exciting inputs (MEIs), from in silico experiments. Here, we present a Bayesian system identification approach to predict neural responses to visual stimuli, and explore whether explicitly modeling network weight variability can be beneficial for identifying neural response properties. To this end, we use variational inference to estimate the posterior distribution of each model weight given the training data. Tests with different neural datasets demonstrate that this method can achieve higher or comparable performance on neural prediction, with much higher data efficiency, compared to Monte Carlo dropout methods and traditional models using point estimates of the model parameters. At the same time, our variational method provides us with an effectively infinite ensemble, avoiding the idiosyncrasy of any single model, to generate MEIs. This allows us to estimate the uncertainty of the stimulus-response function, which we have found to be negatively correlated with the predictive performance at the model level and may serve to evaluate models. Furthermore, our approach enables us to identify response properties with credible intervals and to determine whether the inferred features are meaningful by performing statistical tests on MEIs. Finally, in silico experiments show that our model generates stimuli driving neuronal activity significantly better than traditional models in the limited-data regime.
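The paper's variational treatment of deep-network weights goes beyond what an abstract can convey, but the core idea it describes — maintaining a posterior distribution over weights and sampling an "effectively infinite ensemble" to obtain credible intervals on predictions — can be sketched in the conjugate Bayesian linear-regression case, where the weight posterior has a closed form. All names, sizes, and hyperparameters below are illustrative, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a stimulus-response dataset (sizes are illustrative)
X = rng.normal(size=(50, 1))                          # "stimuli"
y = X @ np.array([2.0]) + 0.5 * rng.normal(size=50)   # "responses", noise sd 0.5

# Conjugate Gaussian posterior over the single weight:
# prior w ~ N(0, 1/alpha), noise precision beta = 1 / 0.5**2
alpha, beta = 1.0, 4.0
S = np.linalg.inv(alpha * np.eye(1) + beta * X.T @ X)  # posterior covariance
m = beta * S @ X.T @ y                                 # posterior mean

# "Effectively infinite ensemble": sample weights from the posterior and
# summarize the predictive distribution with a 95% credible interval
w_samples = rng.multivariate_normal(m, S, size=10_000)
preds = w_samples @ np.array([1.0])    # predictions at a test stimulus x = 1
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"posterior mean weight: {m[0]:.2f}, 95% CI at x=1: [{lo:.2f}, {hi:.2f}]")
```

In the paper's setting the posterior is not available in closed form, which is why variational inference is used to approximate it; the sampling-and-summarizing step, however, works the same way once an approximate posterior is in hand.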


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f68/11318871/5cea446dfe54/pcbi.1012354.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f68/11318871/1233ad8c51c8/pcbi.1012354.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f68/11318871/ef7f4e4788f9/pcbi.1012354.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f68/11318871/2b1f713a446e/pcbi.1012354.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f68/11318871/91ec6bbd19ee/pcbi.1012354.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f68/11318871/20443791663d/pcbi.1012354.g006.jpg
