

Neurons with dendrites can perform linearly separable computations with low resolution synaptic weights.

Affiliations

IEMN, CNRS UMR 8520, Villeneuve d'Ascq, 59650, France.

Institut de la Vision, CNRS, INSERM, Paris, 75012, France.

Publication

F1000Res. 2020 Sep 28;9:1174. doi: 10.12688/f1000research.26486.3. eCollection 2020.

DOI: 10.12688/f1000research.26486.3
PMID: 33564396
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7848858/
Abstract

In theory, neurons modelled as single layer perceptrons can implement all linearly separable computations. In practice, however, these computations may require arbitrarily precise synaptic weights. This is a strong constraint since both biological neurons and their artificial counterparts have to cope with limited precision. Here, we explore how non-linear processing in dendrites helps overcome this constraint. We start by finding a class of computations which requires increasing precision with the number of inputs in a perceptron and show that it can be implemented without this constraint in a neuron with sub-linear dendritic subunits. Then, we complement this analytical study by a simulation of a biophysical neuron model with two passive dendrites and a soma, and show that it can implement this computation. This work demonstrates a new role of dendrites in neural computation: by distributing the computation across independent subunits, the same computation can be performed more efficiently with less precise tuning of the synaptic weights. This work not only offers new insight into the importance of dendrites for biological neurons, but also paves the way for new, more efficient architectures of artificial neuromorphic chips.
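The abstract's central claim can be illustrated with a small brute-force sketch. This is an illustration only, not the paper's construction or its biophysical model: the hypothetical Boolean function x1 OR (x2 AND x3 AND x4) is linearly separable, yet no single threshold unit with weights restricted to {-1, 0, 1} computes it, while a neuron with two saturating ("sublinear") dendritic subunits and only binary synaptic weights does.

```python
from itertools import product

# Target: x1 OR (x2 AND x3 AND x4). Illustrative example only; the paper
# constructs a different class of functions and uses a biophysical model.
def f(x):
    return int(x[0] or (x[1] and x[2] and x[3]))

X = list(product([0, 1], repeat=4))

# 1) f IS linearly separable -- but the solution needs unequal weights:
#    one synapse must be three times stronger than the others.
w = (3, 1, 1, 1)
assert all(f(x) == int(sum(wi * xi for wi, xi in zip(w, x)) >= 3) for x in X)

# 2) No single threshold unit with weights in {-1, 0, 1} computes f.
#    Integer thresholds suffice because all weighted input sums are integers.
def single_unit_exists(levels):
    return any(
        all(f(x) == int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)
            for x in X)
        for w in product(levels, repeat=4)
        for theta in range(-4, 6)
    )

assert not single_unit_exists((-1, 0, 1))

# 3) Two thresholded (strongly sublinear, saturating) dendritic subunits
#    with binary weights, summed at the soma, do compute f.
def dendritic_neuron(x):
    d1 = int(x[0] >= 1)                # subunit 1: one synapse, from x1
    d2 = int(x[1] + x[2] + x[3] >= 3)  # subunit 2: synapses from x2, x3, x4
    return int(d1 + d2 >= 1)           # soma fires if either subunit saturates

assert all(f(x) == dendritic_neuron(x) for x in X)
print("ok")
```

Weight resolution is exactly the point: the separable solution in step 1 requires one weight three times larger than the rest, whereas the dendritic version in step 3 uses only binary weights, each subunit saturating on its own inputs. This is the same qualitative trade-off the paper establishes analytically and then confirms in a simulated neuron with two passive dendrites.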


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30dc/8063519/4291bb96d7cd/f1000research-9-55803-g0000.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30dc/8063519/51d0cf00349b/f1000research-9-55803-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30dc/8063519/0b1ff1af2962/f1000research-9-55803-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30dc/8063519/f604ff890af6/f1000research-9-55803-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30dc/8063519/39e6eeb6337f/f1000research-9-55803-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30dc/8063519/f14526b29eb7/f1000research-9-55803-g0005.jpg

Similar articles

1. Neurons with dendrites can perform linearly separable computations with low resolution synaptic weights.
   F1000Res. 2020 Sep 28;9:1174. doi: 10.12688/f1000research.26486.3. eCollection 2020.
2. All neurons can perform linearly non-separable computations.
   F1000Res. 2021 Jul 6;10:539. doi: 10.12688/f1000research.53961.3. eCollection 2021.
3. Passive dendrites enable single neurons to compute linearly non-separable functions.
   PLoS Comput Biol. 2013;9(2):e1002867. doi: 10.1371/journal.pcbi.1002867. Epub 2013 Feb 28.
4. Demonstration that sublinear dendrites enable linearly non-separable computations.
   Sci Rep. 2024 Aug 6;14(1):18226. doi: 10.1038/s41598-024-65866-9.
5. Dendritic computations captured by an effective point neuron model.
   Proc Natl Acad Sci U S A. 2019 Jul 23;116(30):15244-15252. doi: 10.1073/pnas.1904463116. Epub 2019 Jul 10.
6. Dendritic computation.
   Annu Rev Neurosci. 2005;28:503-32. doi: 10.1146/annurev.neuro.28.061604.135703.
7. Might a Single Neuron Solve Interesting Machine Learning Problems Through Successive Computations on Its Dendritic Tree?
   Neural Comput. 2021 May 13;33(6):1554-1571. doi: 10.1162/neco_a_01390.
8. Neurons with Multiplicative Interactions of Nonlinear Synapses.
   Int J Neural Syst. 2019 Oct;29(8):1950012. doi: 10.1142/S0129065719500126. Epub 2019 Mar 26.
9. Introducing the Dendrify framework for incorporating dendrites to spiking neural networks.
   Nat Commun. 2023 Jan 10;14(1):131. doi: 10.1038/s41467-022-35747-8.
10. Chalcogenide optomemristors for multi-factor neuromorphic computation.
    Nat Commun. 2022 Apr 26;13(1):2247. doi: 10.1038/s41467-022-29870-9.

Cited by

1. SAM: A Unified Self-Adaptive Multicompartmental Spiking Neuron Model for Learning With Working Memory.
   Front Neurosci. 2022 Apr 18;16:850945. doi: 10.3389/fnins.2022.850945. eCollection 2022.
2. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution.
   Front Neurosci. 2021 Dec 24;15:757790. doi: 10.3389/fnins.2021.757790. eCollection 2021.