

Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.

Authors

Neftci Emre O, Pedroni Bruno U, Joshi Siddharth, Al-Shedivat Maruan, Cauwenberghs Gert

Affiliations

Department of Cognitive Sciences, University of California, Irvine Irvine, CA, USA.

Department of Bioengineering, University of California San Diego, La Jolla, CA, USA.

Publication

Front Neurosci. 2016 Jun 29;10:241. doi: 10.3389/fnins.2016.00241. eCollection 2016.

DOI: 10.3389/fnins.2016.00241
PMID: 27445650
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4925698/
Abstract

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
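The central mechanism in the abstract, a random Bernoulli mask over the connections that both drives sampling and regularizes learning like DropConnect, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the 1/p rescaling of the masked weights, and the sigmoid sampling step are illustrative assumptions for a single stochastic-synapse layer update.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synapse_step(v, W, b, p_keep=0.5):
    """One layer update with 'blank-out' synaptic noise.

    Each connection transmits independently with probability p_keep,
    i.e., a random Bernoulli mask over the weight matrix, akin to
    DropConnect. Masked weights are rescaled by 1/p_keep so the
    expected input to each unit is preserved (an assumption made
    here for illustration).
    """
    mask = rng.random(W.shape) < p_keep          # random mask over connections
    pre = v @ (W * mask) / p_keep + b            # noisy pre-activation
    prob = 1.0 / (1.0 + np.exp(-pre))            # sigmoid firing probability
    return (rng.random(prob.shape) < prob).astype(float)  # sample binary units

# Toy usage: 6 binary visible units driving 4 stochastic hidden units.
v = rng.integers(0, 2, size=6).astype(float)
W = rng.normal(scale=0.1, size=(6, 4))
h = stochastic_synapse_step(v, W, np.zeros(4))
```

Repeating this update over visible and hidden layers yields a Monte Carlo chain whose randomness comes entirely from the synaptic mask, rather than from intrinsically noisy units as in a classical Boltzmann machine.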


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/4535428146f4/fnins-10-00241-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/139560494beb/fnins-10-00241-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/cbd7d2d6ed61/fnins-10-00241-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/dc44c9b6fbf3/fnins-10-00241-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/3c5ebeddc8f6/fnins-10-00241-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/83f546fa8ba3/fnins-10-00241-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/619a3165c43d/fnins-10-00241-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/f7453d69f30e/fnins-10-00241-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/7da64a79646a/fnins-10-00241-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/da29/4925698/c77a896cba21/fnins-10-00241-g0010.jpg

Similar articles

1. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.
   Front Neurosci. 2016 Jun 29;10:241. doi: 10.3389/fnins.2016.00241. eCollection 2016.
2. Event-driven contrastive divergence for spiking neuromorphic systems.
   Front Neurosci. 2014 Jan 30;7:272. doi: 10.3389/fnins.2013.00272. eCollection 2013.
3. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.
   Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
4. Synaptic dynamics: linear model and adaptation algorithm.
   Neural Netw. 2014 Aug;56:49-68. doi: 10.1016/j.neunet.2014.04.001. Epub 2014 Apr 28.
5. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses.
   Neural Comput. 2019 Dec;31(12):2368-2389. doi: 10.1162/neco_a_01238. Epub 2019 Oct 15.
6. Boost event-driven tactile learning with location spiking neurons.
   Front Neurosci. 2023 Apr 21;17:1127537. doi: 10.3389/fnins.2023.1127537. eCollection 2023.
7. An unsupervised STDP-based spiking neural network inspired by biologically plausible learning rules and connections.
   Neural Netw. 2023 Aug;165:799-808. doi: 10.1016/j.neunet.2023.06.019. Epub 2023 Jun 22.
8. Analog Memristive Synapse in Spiking Networks Implementing Unsupervised Learning.
   Front Neurosci. 2016 Oct 25;10:482. doi: 10.3389/fnins.2016.00482. eCollection 2016.
9. Neural sampling machine with stochastic synapse allows brain-like learning and inference.
   Nat Commun. 2022 May 11;13(1):2571. doi: 10.1038/s41467-022-30305-8.
10. An Adaptive STDP Learning Rule for Neuromorphic Systems.
   Front Neurosci. 2021 Sep 24;15:741116. doi: 10.3389/fnins.2021.741116. eCollection 2021.

Cited by

1. Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics.
   PLoS Comput Biol. 2024 Nov 4;20(11):e1012531. doi: 10.1371/journal.pcbi.1012531. eCollection 2024 Nov.
2. Synchronization of Complex Dynamical Networks with Stochastic Links Dynamics.
   Entropy (Basel). 2023 Oct 17;25(10):1457. doi: 10.3390/e25101457.
3. Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks.
   Front Neurosci. 2023 Jul 31;17:1230002. doi: 10.3389/fnins.2023.1230002. eCollection 2023.
4. Coherent noise enables probabilistic sequence replay in spiking neuronal networks.
   PLoS Comput Biol. 2023 May 2;19(5):e1010989. doi: 10.1371/journal.pcbi.1010989. eCollection 2023 May.
5. Neural sampling machine with stochastic synapse allows brain-like learning and inference.
   Nat Commun. 2022 May 11;13(1):2571. doi: 10.1038/s41467-022-30305-8.
6. Probabilistic Spike Propagation for Efficient Hardware Implementation of Spiking Neural Networks.
   Front Neurosci. 2021 Jul 15;15:694402. doi: 10.3389/fnins.2021.694402. eCollection 2021.
7. Biologically Plausible Class Discrimination Based Recurrent Neural Network Training for Motor Pattern Generation.
   Front Neurosci. 2020 Aug 12;14:772. doi: 10.3389/fnins.2020.00772. eCollection 2020.
8. Effects of synaptic integration on the dynamics and computational performance of spiking neural network.
   Cogn Neurodyn. 2020 Jun;14(3):347-357. doi: 10.1007/s11571-020-09572-y. Epub 2020 Feb 19.
9. Deterministic networks for probabilistic computing.
   Sci Rep. 2019 Dec 4;9(1):18303. doi: 10.1038/s41598-019-54137-7.
10. ConvPath: A software tool for lung adenocarcinoma digital pathological image analysis aided by a convolutional neural network.
   EBioMedicine. 2019 Dec;50:103-110. doi: 10.1016/j.ebiom.2019.10.033. Epub 2019 Nov 22.

References

1. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.
   IEEE Trans Biomed Circuits Syst. 2017 Jun;11(3):574-584. doi: 10.1109/TBCAS.2017.2666883. Epub 2017 May 19.
2. Energy-Efficient Neuromorphic Classifiers.
   Neural Comput. 2016 Oct;28(10):2011-44. doi: 10.1162/NECO_a_00882. Epub 2016 Aug 24.
3. Unsupervised learning of digit recognition using spike-timing-dependent plasticity.
   Front Comput Neurosci. 2015 Aug 3;9:99. doi: 10.3389/fncom.2015.00099. eCollection 2015.
4. Network Plasticity as Bayesian Inference.
   PLoS Comput Biol. 2015 Nov 6;11(11):e1004485. doi: 10.1371/journal.pcbi.1004485. eCollection 2015 Nov.
5. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems.
   Sci Rep. 2015 Oct 14;5:14730. doi: 10.1038/srep14730.
6. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.
   Front Neurosci. 2015 Jul 9;9:222. doi: 10.3389/fnins.2015.00222. eCollection 2015.
7. Plasticity in memristive devices for spiking neural networks.
   Front Neurosci. 2015 Mar 2;9:51. doi: 10.3389/fnins.2015.00051. eCollection 2015.
8. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.
   Front Comput Neurosci. 2015 Feb 12;9:13. doi: 10.3389/fncom.2015.00013. eCollection 2015.
9. Limits to high-speed simulations of spiking neural networks using general-purpose computers.
   Front Neuroinform. 2014 Sep 11;8:76. doi: 10.3389/fninf.2014.00076. eCollection 2014.
10. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.
   Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.