

Reducing the computational footprint for real-time BCPNN learning.

Authors

Vogginger Bernhard, Schüffny René, Lansner Anders, Cederström Love, Partzsch Johannes, Höppner Sebastian

Affiliations

Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Germany.

Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden.

Publication

Front Neurosci. 2015 Jan 22;9:2. doi: 10.3389/fnins.2015.00002. eCollection 2015.

DOI: 10.3389/fnins.2015.00002
PMID: 25657618
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4302947/
Abstract

The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally expensive, plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the pre-synaptic, post-synaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model so that the number of basic arithmetic operations per update is halved, and, second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
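The event-driven implementation described in the abstract exploits the closed-form solution of a first-order low-pass filter: between two events a trace decays by a single factor exp(-Δt/τ), so the state only needs updating when a spike arrives, and the exponential itself can come from a precomputed table. A minimal Python sketch of this idea for one trace (the time constant, table resolution, and class names here are illustrative, not taken from the paper; the full BCPNN rule chains three such filtering stages over eight state variables):

```python
import math

TAU = 20.0          # decay time constant in ms (illustrative value)
LUT_STEP = 0.5      # look-up-table resolution in ms (illustrative value)
LUT_MAX = 10 * TAU  # beyond ~10 time constants the trace is treated as zero

# Precompute exp(-dt / TAU) for quantized elapsed times.
_lut = [math.exp(-i * LUT_STEP / TAU) for i in range(int(LUT_MAX / LUT_STEP) + 1)]

def decay_factor(dt):
    """Look up exp(-dt / TAU) with nearest-neighbour quantization of dt."""
    idx = round(dt / LUT_STEP)
    return _lut[idx] if idx < len(_lut) else 0.0

class Trace:
    """One low-pass-filtered spike trace, updated only on events."""
    def __init__(self):
        self.value = 0.0
        self.last_t = 0.0

    def on_spike(self, t, increment=1.0):
        # Analytic decay over the whole interval since the last event,
        # then add the spike contribution -- one update per event.
        self.value = self.value * decay_factor(t - self.last_t) + increment
        self.last_t = t

    def read(self, t):
        """Current trace value, decayed to time t without mutating state."""
        return self.value * decay_factor(t - self.last_t)
```

On a spike, the trace is decayed analytically over the elapsed interval and incremented once, replacing the many small steps a fixed-step Euler integrator would take; the table trades a small quantization error in Δt for avoiding repeated exponential evaluations.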

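The fixed-point evaluation mentioned in the abstract comes down to a fractional multiply: state variable and decay factor are stored as integers with an implicit binary point, and each decay becomes an integer multiplication followed by a shift. A hedged sketch (the bit width below is illustrative; assessing the widths actually needed to match Euler-method accuracy is the paper's contribution):

```python
FRAC_BITS = 12          # illustrative fractional precision, not the paper's result
SCALE = 1 << FRAC_BITS  # value 1.0 maps to this integer

def to_fixed(x):
    """Encode a value in [0, 1] as an unsigned fixed-point integer."""
    return int(round(x * SCALE))

def from_fixed(q):
    """Decode a fixed-point integer back to a float."""
    return q / SCALE

def fixed_decay(q_state, q_factor):
    """Multiply two fixed-point values; the shift restores the binary point."""
    return (q_state * q_factor) >> FRAC_BITS
```

Each truncating shift loses at most one unit in the last place (2^-12 here), and that error accumulates over repeated decays, which is why the number of bits per state variable has to be chosen against the accuracy of the reference Euler integration.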

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7737/4302947/8ec3c3d05d2e/fnins-09-00002-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7737/4302947/b819d7c15a8d/fnins-09-00002-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7737/4302947/fe8186ff6b0a/fnins-09-00002-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7737/4302947/58c02c5e813d/fnins-09-00002-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7737/4302947/ca2d4a4701f8/fnins-09-00002-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7737/4302947/9f7f172d93db/fnins-09-00002-g0005.jpg

Similar Articles

1. Reducing the computational footprint for real-time BCPNN learning. Front Neurosci. 2015 Jan 22;9:2. doi: 10.3389/fnins.2015.00002. eCollection 2015.
2. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front Neuroanat. 2016 Apr 7;10:37. doi: 10.3389/fnana.2016.00037. eCollection 2016.
3. Mapping the BCPNN Learning Rule to a Memristor Model. Front Neurosci. 2021 Dec 9;15:750458. doi: 10.3389/fnins.2021.750458. eCollection 2021.
4. Optimizing BCPNN Learning Rule for Memory Access. Front Neurosci. 2020 Aug 31;14:878. doi: 10.3389/fnins.2020.00878. eCollection 2020.
5. A forecast-based STDP rule suitable for neuromorphic implementation. Neural Netw. 2012 Aug;32:3-14. doi: 10.1016/j.neunet.2012.02.018. Epub 2012 Feb 14.
6. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture. Front Neurosci. 2016 Sep 14;10:420. doi: 10.3389/fnins.2016.00420. eCollection 2016.
7. Traces of semantization - from episodic to semantic memory in a spiking cortical network model. eNeuro. 2022 Jul 8;9(4). doi: 10.1523/ENEURO.0062-22.2022.
8. Efficient Synapse Memory Structure for Reconfigurable Digital Neuromorphic Hardware. Front Neurosci. 2018 Nov 20;12:829. doi: 10.3389/fnins.2018.00829. eCollection 2018.
9. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines. Front Neurosci. 2017 Jun 21;11:324. doi: 10.3389/fnins.2017.00324. eCollection 2017.
10. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Front Neurosci. 2018 May 23;12:291. doi: 10.3389/fnins.2018.00291. eCollection 2018.

Cited By

1. Mapping the BCPNN Learning Rule to a Memristor Model. Front Neurosci. 2021 Dec 9;15:750458. doi: 10.3389/fnins.2021.750458. eCollection 2021.
2. Optimizing BCPNN Learning Rule for Memory Access. Front Neurosci. 2020 Aug 31;14:878. doi: 10.3389/fnins.2020.00878. eCollection 2020.
3. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front Neuroanat. 2016 Apr 7;10:37. doi: 10.3389/fnana.2016.00037. eCollection 2016.

References

1. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science. 2014 Aug 8;345(6197):668-73. doi: 10.1126/science.1254642. Epub 2014 Aug 7.
2. Memory consolidation from seconds to weeks: a three-stage neural network model with autonomous reinstatement dynamics. Front Comput Neurosci. 2014 Jul 1;8:64. doi: 10.3389/fncom.2014.00064. eCollection 2014.
3. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Front Synaptic Neurosci. 2014 Apr 8;6:8. doi: 10.3389/fnsyn.2014.00008. eCollection 2014.
4. A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system. Front Neural Circuits. 2014 Feb 7;8:5. doi: 10.3389/fncir.2014.00005. eCollection 2014.
5. A modular attractor associative memory with patchy connectivity and weight pruning. Network. 2013;24(4):129-50. doi: 10.3109/0954898X.2013.859323.
6. Finding a roadmap to achieve large neuromorphic hardware systems. Front Neurosci. 2013 Sep 10;7:118. doi: 10.3389/fnins.2013.00118. eCollection 2013.
7. Reactivation in working memory: an attractor network model of free recall. PLoS One. 2013 Aug 30;8(8):e73776. doi: 10.1371/journal.pone.0073776. eCollection 2013.
8. Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture optimization. Neural Netw. 2013 Sep;45:4-26. doi: 10.1016/j.neunet.2013.05.011. Epub 2013 Jun 6.
9. Effect of prestimulus alpha power, phase, and synchronization on stimulus detection rates in a biophysical attractor network model. J Neurosci. 2013 Jul 17;33(29):11817-24. doi: 10.1523/JNEUROSCI.5155-12.2013.
10. Action selection performance of a reconfigurable basal ganglia inspired model with Hebbian-Bayesian Go-NoGo connectivity. Front Behav Neurosci. 2012 Oct 2;6:65. doi: 10.3389/fnbeh.2012.00065. eCollection 2012.