Suppr 超能文献



Bayesian continual learning spiking neural networks.

Authors

Skatchkovsky Nicolas, Jang Hyeryung, Simeone Osvaldo

Affiliations

King's Communication, Learning and Information Processing (KCLIP) Lab, Department of Engineering, King's College London, London, United Kingdom.

Department of Artificial Intelligence, Dongguk University, Seoul, South Korea.

Publication

Front Comput Neurosci. 2022 Nov 16;16:1037976. doi: 10.3389/fncom.2022.1037976. eCollection 2022.

DOI: 10.3389/fncom.2022.1037976
PMID: 36465962
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9708898/
Abstract

Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and uncertainty quantification for risk management. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps toward the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In this framework, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification.
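The core mechanism the abstract describes — each weight carries distribution parameters that are updated online as data stream in, with predictions averaged over sampled weights to quantify uncertainty — can be sketched compactly. The sketch below is a minimal, hypothetical illustration only: it uses a mean-field Gaussian posterior over the weights of a plain logistic-regression surrogate, not an SNN, and not the paper's derivation or Intel's Lava platform; all names and hyperparameters (`kl_scale`, `lr`, the softplus parameterization) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # tanh form avoids overflow for large |z|
    return 0.5 * (1.0 + np.tanh(0.5 * z))

class BayesianOnlineWeights:
    """Mean-field Gaussian posterior N(mu, std^2) per weight,
    updated one observation at a time (streaming)."""

    def __init__(self, dim, prior_var=1.0, lr=0.05, kl_scale=1e-3,
                 n_samples=20):
        self.mu = np.zeros(dim)        # posterior means
        self.rho = np.full(dim, -1.0)  # std = softplus(rho) > 0
        self.prior_var = prior_var     # isotropic Gaussian prior
        self.lr = lr
        self.kl_scale = kl_scale       # down-weighted KL, as in minibatch ELBOs
        self.n_samples = n_samples     # Monte-Carlo samples at prediction time

    def _std(self):
        return np.log1p(np.exp(self.rho))  # softplus

    def update(self, x, y):
        """One streaming gradient step on the (reparameterized) ELBO."""
        std = self._std()
        eps = rng.standard_normal(self.mu.shape)
        w = self.mu + std * eps             # sampled weights
        p = sigmoid(x @ w)                  # Bernoulli likelihood
        g_w = (p - y) * x                   # dNLL/dw
        # KL(q || prior) gradients: dKL/dmu = mu/pv, dKL/dstd = std/pv - 1/std
        g_mu = g_w + self.kl_scale * self.mu / self.prior_var
        g_std = g_w * eps + self.kl_scale * (std / self.prior_var - 1.0 / std)
        g_rho = g_std * (1.0 - np.exp(-std))  # chain rule through softplus
        self.mu -= self.lr * g_mu
        self.rho -= self.lr * g_rho

    def predict(self, x):
        """Predictive probability averaged over weight samples:
        the spread across samples reflects epistemic uncertainty."""
        std = self._std()
        ps = [sigmoid(x @ (self.mu + std * rng.standard_normal(self.mu.shape)))
              for _ in range(self.n_samples)]
        return float(np.mean(ps))

# demo: stream a toy linearly separable task (y = 1 iff x1 + x2 > 0)
model = BayesianOnlineWeights(dim=2)
for _ in range(3000):
    x = rng.standard_normal(2)
    y = 1.0 if x.sum() > 0 else 0.0
    model.update(x, y)
p_pos = model.predict(np.array([3.0, 3.0]))
p_neg = model.predict(np.array([-3.0, -3.0]))
```

On this toy stream the posterior means align with the separating direction while the per-weight standard deviations should shrink as evidence accumulates — a streaming analogue of the epistemic-uncertainty contraction the abstract attributes to the distribution parameters.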


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/07e7c7501ef4/fncom-16-1037976-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/f0e01dcdbdf2/fncom-16-1037976-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/ea1800f72087/fncom-16-1037976-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/105020e592f1/fncom-16-1037976-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/8776e4614642/fncom-16-1037976-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/e6f573602d9f/fncom-16-1037976-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/bd7b50274634/fncom-16-1037976-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/d9266f54ec2c/fncom-16-1037976-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/9a236028b89b/fncom-16-1037976-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/95debef9cbc4/fncom-16-1037976-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/8707d294a40f/fncom-16-1037976-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b739/9708898/10a8a45d755a/fncom-16-1037976-g0012.jpg

Similar Articles

1. Bayesian continual learning spiking neural networks.
Front Comput Neurosci. 2022 Nov 16;16:1037976. doi: 10.3389/fncom.2022.1037976. eCollection 2022.
2. Deep Learning With Spiking Neurons: Opportunities and Challenges.
Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
3. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.
4. Data-driven artificial and spiking neural networks for inverse kinematics in neurorobotics.
Patterns (N Y). 2021 Nov 18;3(1):100391. doi: 10.1016/j.patter.2021.100391. eCollection 2022 Jan 14.
5. Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi.
Front Neurosci. 2022 May 30;16:883360. doi: 10.3389/fnins.2022.883360. eCollection 2022.
6. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
7. Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence.
Neural Netw. 2020 Jan;121:366-386. doi: 10.1016/j.neunet.2019.09.024. Epub 2019 Sep 26.
8. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning.
Front Neurosci. 2018 Aug 29;12:583. doi: 10.3389/fnins.2018.00583. eCollection 2018.
9. Neuromorphic NEF-Based Inverse Kinematics and PID Control.
Front Neurorobot. 2021 Feb 3;15:631159. doi: 10.3389/fnbot.2021.631159. eCollection 2021.

Cited By

1. Agreeing to Stop: Reliable Latency-Adaptive Decision Making via Ensembles of Spiking Neural Networks.
Entropy (Basel). 2024 Jan 31;26(2):126. doi: 10.3390/e26020126.

References

1. Memory-inspired spiking hyperdimensional network for robust online learning.
Sci Rep. 2022 May 10;12(1):7641. doi: 10.1038/s41598-022-11073-3.
2. Multisample Online Learning for Probabilistic Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2034-2044. doi: 10.1109/TNNLS.2022.3144296. Epub 2022 May 2.
3. Synaptic metaplasticity in binarized neural networks.
Nat Commun. 2021 May 5;12(1):2549. doi: 10.1038/s41467-021-22768-y.
4. Synaptic plasticity as Bayesian inference.
Nat Neurosci. 2021 Apr;24(4):565-571. doi: 10.1038/s41593-021-00809-5. Epub 2021 Mar 11.
5. A solution to the learning dilemma for recurrent networks of spiking neurons.
Nat Commun. 2020 Jul 17;11(1):3625. doi: 10.1038/s41467-020-17236-y.
6. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).
Front Neurosci. 2020 May 12;14:424. doi: 10.3389/fnins.2020.00424. eCollection 2020.
7. Continual Learning Through Synaptic Intelligence.
Proc Mach Learn Res. 2017;70:3987-3995.
8. Continual lifelong learning with neural networks: A review.
Neural Netw. 2019 May;113:54-71. doi: 10.1016/j.neunet.2019.01.012. Epub 2019 Feb 6.
9. Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules.
Front Neural Circuits. 2018 Jul 31;12:53. doi: 10.3389/fncir.2018.00053. eCollection 2018.
10. Hierarchical Bayesian Inference and Learning in Spiking Neural Networks.
IEEE Trans Cybern. 2019 Jan;49(1):133-145. doi: 10.1109/TCYB.2017.2768554. Epub 2017 Nov 9.