Spiking Autoencoders With Temporal Coding

Authors

Comşa Iulia-Maria, Versari Luca, Fischbacher Thomas, Alakuijala Jyrki

Affiliation

Google Research, Zürich, Switzerland.

Publication

Front Neurosci. 2021 Aug 13;15:712667. doi: 10.3389/fnins.2021.712667. eCollection 2021.

DOI: 10.3389/fnins.2021.712667
PMID: 34483829
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8414972/
Abstract

Spiking neural networks with temporal coding schemes process information based on the relative timing of neuronal spikes. In supervised learning tasks, temporal coding allows learning through backpropagation with exact derivatives, and achieves accuracies on par with conventional artificial neural networks. Here we introduce spiking autoencoders with temporal coding and pulses, trained using backpropagation to store and reconstruct images with high fidelity from compact representations. We show that spiking autoencoders with a single layer are able to effectively represent and reconstruct images from the neuromorphically-encoded MNIST and FMNIST datasets. We explore the effect of different spike time target latencies, data noise levels and embedding sizes, as well as the classification performance from the embeddings. The spiking autoencoders achieve results similar to or better than conventional non-spiking autoencoders. We find that inhibition is essential in the functioning of the spiking autoencoders, particularly when the input needs to be memorised for a longer time before the expected output spike times. To reconstruct images with a high target latency, the network learns to accumulate negative evidence and to use the pulses as excitatory triggers for producing the output spikes at the required times. Our results highlight the potential of spiking autoencoders as building blocks for more complex biologically-inspired architectures. We also provide open-source code for the model.
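As a toy illustration of the temporal (latency) coding idea described in the abstract — not the paper's actual model — stronger inputs can be mapped to earlier spike times and recovered by inverting the map. The linear time map, the `T_MAX` constant, and the function names below are assumptions for illustration only:

```python
import numpy as np

T_MAX = 1.0  # latest allowed spike time (arbitrary unit)

def encode_latency(x, t_max=T_MAX):
    """Latency coding: intensities in [0, 1] become spike times.

    Brighter (larger) inputs spike earlier; an intensity of 1.0
    spikes at t = 0, an intensity of 0.0 spikes at t = t_max.
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

def decode_latency(t, t_max=T_MAX):
    """Invert the latency code: spike times back to intensities."""
    return 1.0 - np.asarray(t, dtype=float) / t_max

# A tiny "image" of four pixel intensities round-trips losslessly.
image = np.array([0.0, 0.25, 0.5, 1.0])
times = encode_latency(image)
recon = decode_latency(times)
print(times)
print(np.allclose(recon, image))
```

An autoencoder in this setting operates entirely on such spike times: the encoder maps input spike times to a small set of latent spike times, and the decoder maps those back to output spike times, with the reconstruction error defined on the timing itself.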


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/4bb17094a778/fnins-15-712667-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/f35ef88e91f6/fnins-15-712667-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/5cdfc02dadbe/fnins-15-712667-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/4c23c12c5f62/fnins-15-712667-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/39f933c227e1/fnins-15-712667-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/6173fda2c968/fnins-15-712667-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/e5a80ebdf0d1/fnins-15-712667-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/858c175585a6/fnins-15-712667-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/364934c5c094/fnins-15-712667-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/a2406395f11d/fnins-15-712667-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7048/8414972/517635ace4c7/fnins-15-712667-g0011.jpg

Similar articles

1. Spiking Autoencoders With Temporal Coding.
   Front Neurosci. 2021 Aug 13;15:712667. doi: 10.3389/fnins.2021.712667. eCollection 2021.
2. Temporal Coding in Spiking Neural Networks With Alpha Synaptic Function: Learning With Backpropagation.
   IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5939-5952. doi: 10.1109/TNNLS.2021.3071976. Epub 2022 Oct 5.
3. Synthesizing Images From Spatio-Temporal Representations Using Spike-Based Backpropagation.
   Front Neurosci. 2019 Jun 18;13:621. doi: 10.3389/fnins.2019.00621. eCollection 2019.
4. Supervised Learning With First-to-Spike Decoding in Multilayer Spiking Neural Networks.
   Front Comput Neurosci. 2021 Apr 12;15:617862. doi: 10.3389/fncom.2021.617862. eCollection 2021.
5. High-performance deep spiking neural networks with 0.3 spikes per neuron.
   Nat Commun. 2024 Aug 9;15(1):6793. doi: 10.1038/s41467-024-51110-5.
6. First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures.
   Front Neurosci. 2023 Oct 2;17:1266003. doi: 10.3389/fnins.2023.1266003. eCollection 2023.
7. Enhanced representation learning with temporal coding in sparsely spiking neural networks.
   Front Comput Neurosci. 2023 Nov 21;17:1250908. doi: 10.3389/fncom.2023.1250908. eCollection 2023.
8. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
9. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3227-3235. doi: 10.1109/TNNLS.2017.2726060. Epub 2017 Aug 1.
10. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation.
    IEEE Trans Neural Netw Learn Syst. 2023 Dec;34(12):10141-10153. doi: 10.1109/TNNLS.2022.3164930. Epub 2023 Nov 30.

Cited by

1. Direct training high-performance deep spiking neural networks: a review of theories and methods.
   Front Neurosci. 2024 Jul 31;18:1383844. doi: 10.3389/fnins.2024.1383844. eCollection 2024.
2. Efficient and generalizable cross-patient epileptic seizure detection through a spiking neural network.
   Front Neurosci. 2024 Jan 10;17:1303564. doi: 10.3389/fnins.2023.1303564. eCollection 2023.
3. SPIDEN: deep Spiking Neural Networks for efficient image denoising.
   Front Neurosci. 2023 Aug 11;17:1224457. doi: 10.3389/fnins.2023.1224457. eCollection 2023.
4. VTSNN: a virtual temporal spiking neural network.
   Front Neurosci. 2023 May 23;17:1091097. doi: 10.3389/fnins.2023.1091097. eCollection 2023.

References

1. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1947-1958. doi: 10.1109/TNNLS.2021.3110991. Epub 2022 May 2.
2. A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design.
   IEEE Trans Neural Netw Learn Syst. 2023 Jan;34(1):394-408. doi: 10.1109/TNNLS.2021.3095068. Epub 2023 Jan 5.
3. Temporal Coding in Spiking Neural Networks With Alpha Synaptic Function: Learning With Backpropagation.
   IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5939-5952. doi: 10.1109/TNNLS.2021.3071976. Epub 2022 Oct 5.
4. Visualizing a joint future of neuroscience and neuromorphic engineering.
   Neuron. 2021 Feb 17;109(4):571-575. doi: 10.1016/j.neuron.2021.01.009.
5. Will We Ever Have Conscious Machines?
   Front Comput Neurosci. 2020 Dec 22;14:556544. doi: 10.3389/fncom.2020.556544. eCollection 2020.
6. Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron.
   Int J Neural Syst. 2020 Jun;30(6):2050027. doi: 10.1142/S0129065720500276. Epub 2020 May 28.
7. Synthesizing Images From Spatio-Temporal Representations Using Spike-Based Backpropagation.
   Front Neurosci. 2019 Jun 18;13:621. doi: 10.3389/fnins.2019.00621. eCollection 2019.
8. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes.
   IEEE Trans Neural Netw Learn Syst. 2020 Apr;31(4):1285-1296. doi: 10.1109/TNNLS.2019.2919662. Epub 2019 Jun 21.
9. STDP-based spiking deep convolutional neural networks for object recognition.
   Neural Netw. 2018 Mar;99:56-67. doi: 10.1016/j.neunet.2017.12.005. Epub 2017 Dec 23.
10. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
    IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3227-3235. doi: 10.1109/TNNLS.2017.2726060. Epub 2017 Aug 1.