Neural spiking for causal inference and learning.

Affiliations

Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America.

Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America.

Publication information

PLoS Comput Biol. 2023 Apr 4;19(4):e1011005. doi: 10.1371/journal.pcbi.1011005. eCollection 2023 Apr.

DOI:10.1371/journal.pcbi.1011005
PMID:37014913
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10104331/
Abstract

When a neuron is driven beyond its threshold, it spikes. The fact that it does not communicate its continuous membrane potential is usually seen as a computational liability. Here we show that this spiking mechanism allows neurons to produce an unbiased estimate of their causal influence, and a way of approximating gradient descent-based learning. Importantly, neither activity of upstream neurons, which act as confounders, nor downstream non-linearities bias the results. We show how spiking enables neurons to solve causal estimation problems and that local plasticity can approximate gradient descent using spike discontinuity learning.
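The estimation idea described above is a regression-discontinuity-style comparison of near-threshold trials. The following toy simulation is our own illustration under assumed parameters (the threshold, reward structure, confounder strength, and window width are all arbitrary choices, not values from the paper); it shows how comparing trials where the input barely crossed versus barely missed threshold removes the confounding that biases a naive correlational estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 50_000
threshold = 1.0
true_effect = 2.0  # assumed causal effect of one spike on reward

# Shared upstream drive x acts as a confounder: it raises both the
# neuron's input and the reward directly.
x = rng.normal(0.0, 1.0, n_trials)
drive = x + rng.normal(0.0, 0.5, n_trials)   # neuron's summed input
spike = (drive > threshold).astype(float)    # thresholded spike output
reward = true_effect * spike + 1.5 * x + rng.normal(0.0, 0.1, n_trials)

# Naive correlational estimate: biased upward, because trials with a
# spike also tend to have a high confounding drive x.
naive = reward[spike == 1].mean() - reward[spike == 0].mean()

# Spike-discontinuity estimate: restrict to trials where the input
# landed just above or just below threshold. The confounder varies
# smoothly across the threshold, so the jump in mean reward isolates
# the spike's causal effect.
window = 0.05
just_above = (drive > threshold) & (drive < threshold + window)
just_below = (drive < threshold) & (drive > threshold - window)
sd_estimate = reward[just_above].mean() - reward[just_below].mean()

print(f"naive estimate: {naive:.2f}")
print(f"spike-discontinuity estimate: {sd_estimate:.2f} (true effect {true_effect})")
```

Narrowing the window reduces residual confounding (the confounder differs less between the two groups) at the cost of fewer trials and noisier means, the usual regression-discontinuity bias-variance trade-off.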


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8213/10104331/6b48b2985f44/pcbi.1011005.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8213/10104331/c23283c29eff/pcbi.1011005.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8213/10104331/7f54f486130a/pcbi.1011005.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8213/10104331/22bc00dbb65d/pcbi.1011005.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8213/10104331/c55c5686ed87/pcbi.1011005.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8213/10104331/6cd49b6d422c/pcbi.1011005.g006.jpg

Similar articles

1. Neural spiking for causal inference and learning.
   PLoS Comput Biol. 2023 Apr 4;19(4):e1011005. doi: 10.1371/journal.pcbi.1011005. eCollection 2023 Apr.
2. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.
   Neural Netw. 2013 Jul;43:99-113. doi: 10.1016/j.neunet.2013.02.003. Epub 2013 Feb 16.
3. An online supervised learning method based on gradient descent for spiking neurons.
   Neural Netw. 2017 Sep;93:7-20. doi: 10.1016/j.neunet.2017.04.010. Epub 2017 Apr 27.
4. Digital spiking neuron and its learning for approximation of various spike-trains.
   Neural Netw. 2008 Mar-Apr;21(2-3):140-9. doi: 10.1016/j.neunet.2007.12.045. Epub 2008 Jan 5.
5. Investigating the computational power of spiking neurons with non-standard behaviors.
   Neural Netw. 2013 Jul;43:41-54. doi: 10.1016/j.neunet.2013.01.011. Epub 2013 Feb 9.
6. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons.
   IEEE Trans Neural Netw Learn Syst. 2019 Jan;30(1):123-137. doi: 10.1109/TNNLS.2018.2833077. Epub 2018 May 28.
7. Learning Precise Spike Train-to-Spike Train Transformations in Multilayer Feedforward Neuronal Networks.
   Neural Comput. 2016 May;28(5):826-48. doi: 10.1162/NECO_a_00829. Epub 2016 Mar 4.
8. Bayesian spiking neurons II: learning.
   Neural Comput. 2008 Jan;20(1):118-45. doi: 10.1162/neco.2008.20.1.118.
9. Mathematical formulations of Hebbian learning.
   Biol Cybern. 2002 Dec;87(5-6):404-15. doi: 10.1007/s00422-002-0353-y.
10. Introduction to spiking neural networks: Information processing, learning and applications.
    Acta Neurobiol Exp (Wars). 2011;71(4):409-33. doi: 10.55782/ane-2011-1862.

Cited by

1. A role for cortical interneurons as adversarial discriminators.
   PLoS Comput Biol. 2023 Sep 28;19(9):e1011484. doi: 10.1371/journal.pcbi.1011484. eCollection 2023 Sep.
2. Volitional Generation of Reproducible, Efficient Temporal Patterns.
   Brain Sci. 2022 Sep 20;12(10):1269. doi: 10.3390/brainsci12101269.
3. Bayesian model averaging for nonparametric discontinuity design.

References

1. The molecular memory code and synaptic plasticity: A synthesis.
   Biosystems. 2023 Feb;224:104825. doi: 10.1016/j.biosystems.2022.104825. Epub 2023 Jan 4.
2. Novelty is not surprise: Human exploratory and adaptive behavior in sequential decision-making.
   PLoS Comput Biol. 2021 Jun 3;17(6):e1009070. doi: 10.1371/journal.pcbi.1009070. eCollection 2021 Jun.
3. Learning in Volatile Environments With the Bayes Factor Surprise.
   Neural Comput. 2021 Feb;33(2):269-340. doi: 10.1162/neco_a_01352. Epub 2021 Jan 5.
   PLoS One. 2022 Jun 30;17(6):e0270310. doi: 10.1371/journal.pone.0270310. eCollection 2022.
4. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data.
   PLoS Comput Biol. 2020 Dec 21;16(12):e1008289. doi: 10.1371/journal.pcbi.1008289. eCollection 2020 Dec.
5. A solution to the learning dilemma for recurrent networks of spiking neurons.
   Nat Commun. 2020 Jul 17;11(1):3625. doi: 10.1038/s41467-020-17236-y.
6. Correlated states in balanced neuronal networks.
   Phys Rev E. 2019 May;99(5-1):052414. doi: 10.1103/PhysRevE.99.052414.
7. Deep learning in spiking neural networks.
   Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
8. Cerebellar learning using perturbations.
   Elife. 2018 Nov 12;7:e31599. doi: 10.7554/eLife.31599.
9. Deep Learning With Spiking Neurons: Opportunities and Challenges.
   Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
10. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
    Neural Comput. 2018 Jun;30(6):1514-1541. doi: 10.1162/neco_a_01086. Epub 2018 Apr 13.