Suppr 超能文献


Synthesizing Images From Spatio-Temporal Representations Using Spike-Based Backpropagation.

Authors

Roy Deboleena, Panda Priyadarshini, Roy Kaushik

Affiliation

Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States.

Publication

Front Neurosci. 2019 Jun 18;13:621. doi: 10.3389/fnins.2019.00621. eCollection 2019.

DOI: 10.3389/fnins.2019.00621
PMID: 31316331
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6611397/
Abstract

Spiking neural networks (SNNs) offer a promising alternative to current artificial neural networks to enable low-power event-driven neuromorphic hardware. Spike-based neuromorphic applications require processing and extracting meaningful information from spatio-temporal data, represented as series of spike trains over time. In this paper, we propose a method to synthesize images from multiple modalities in a spike-based environment. We use spiking auto-encoders to convert image and audio inputs into compact spatio-temporal representations that are then decoded for image synthesis. For this, we use a direct training algorithm that computes loss on the membrane potential of the output layer and back-propagates it by using a sigmoid approximation of the neuron's activation function to enable differentiability. The spiking autoencoders are benchmarked on MNIST and Fashion-MNIST and achieve very low reconstruction loss, comparable to ANNs. Then, spiking autoencoders are trained to learn meaningful spatio-temporal representations of the data across the two modalities: audio and visual. We synthesize images from audio in a spike-based environment by first generating, and then utilizing, such shared multi-modal spatio-temporal representations. Our audio to image synthesis model is tested on the task of converting TI-46 digit audio samples to MNIST images. We are able to synthesize images with high fidelity and the model achieves competitive performance against ANNs.

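The training scheme the abstract describes relies on two pieces: leaky integrate-and-fire neurons that emit spikes through a hard threshold, and a sigmoid approximation of that threshold so gradients can flow backward. The sketch below is a minimal toy illustration of that general surrogate-gradient idea, not the authors' implementation; the leak factor `alpha`, threshold `v_th`, and sigmoid steepness `k` are assumed parameters chosen for demonstration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lif_forward(inputs, alpha=0.9, v_th=1.0):
    """Simulate one leaky integrate-and-fire (LIF) neuron over T timesteps.

    Returns the spike train and the membrane potential at each step
    (the loss in the paper's scheme is computed on membrane potentials
    of the output layer, not on the spikes themselves).
    """
    v = 0.0
    spikes, potentials = [], []
    for x in inputs:
        v = alpha * v + x                  # leaky integration of input current
        s = 1.0 if v >= v_th else 0.0      # hard threshold -> binary spike
        potentials.append(v)
        spikes.append(s)
        v -= s * v_th                      # soft reset after a spike
    return spikes, potentials

def surrogate_grad(v, v_th=1.0, k=5.0):
    """Surrogate for d(spike)/d(v): the hard threshold is non-differentiable,
    so the backward pass substitutes the derivative of sigmoid(k * (v - v_th))."""
    s = sigmoid(k * (v - v_th))
    return k * s * (1.0 - s)
```

For example, feeding a constant input of 0.6 makes the membrane potential cross threshold on the second step (0.9 * 0.6 + 0.6 = 1.14), producing the spike train [0, 1, 0] over three steps; `surrogate_grad` peaks exactly at the threshold, so gradient mass concentrates on timesteps where the neuron was close to firing.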

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/a2538806b5a6/fnins-13-00621-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/c65e4059485f/fnins-13-00621-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/5bee55d33624/fnins-13-00621-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/ad36eda8e77e/fnins-13-00621-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/a2b4f787e76a/fnins-13-00621-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/36afd43f1df9/fnins-13-00621-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/cd43d3620cf7/fnins-13-00621-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/5a5a77b9a6e1/fnins-13-00621-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/b88be5dc6a4f/fnins-13-00621-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/872c/6611397/8e46ede8256a/fnins-13-00621-g0010.jpg

Similar Articles

1. Synthesizing Images From Spatio-Temporal Representations Using Spike-Based Backpropagation.
Front Neurosci. 2019 Jun 18;13:621. doi: 10.3389/fnins.2019.00621. eCollection 2019.
2. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
3. Spiking Autoencoders With Temporal Coding.
Front Neurosci. 2021 Aug 13;15:712667. doi: 10.3389/fnins.2021.712667. eCollection 2021.
4. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
5. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain.
Front Neurosci. 2021 Mar 25;15:608567. doi: 10.3389/fnins.2021.608567. eCollection 2021.
6. Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets.
Front Neurosci. 2020 Mar 13;14:143. doi: 10.3389/fnins.2020.00143. eCollection 2020.
7. A TTFS-based energy and utilization efficient neuromorphic CNN accelerator.
Front Neurosci. 2023 May 5;17:1121592. doi: 10.3389/fnins.2023.1121592. eCollection 2023.
8. Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.
Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.
9. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
10. Training Deep Spiking Neural Networks Using Backpropagation.
Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.

Cited By

1. Analysis and knowledge extraction of newborn resuscitation activities from annotation files.
BMC Med Inform Decis Mak. 2024 Nov 5;24(1):327. doi: 10.1186/s12911-024-02736-4.
2. Mutual information measure of visual perception based on noisy spiking neural networks.
Front Neurosci. 2023 Aug 16;17:1155362. doi: 10.3389/fnins.2023.1155362. eCollection 2023.
3. SPIDEN: deep Spiking Neural Networks for efficient image denoising.
Front Neurosci. 2023 Aug 11;17:1224457. doi: 10.3389/fnins.2023.1224457. eCollection 2023.

References

1. Representation learning using event-based STDP.
Neural Netw. 2018 Sep;105:294-303. doi: 10.1016/j.neunet.2018.05.018. Epub 2018 Jun 1.
2. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
3. Training Deep Spiking Neural Networks Using Backpropagation.
Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.
4. BlocTrain: Block-Wise Conditional Training and Inference for Efficient Spike-Based Deep Learning.
Front Neurosci. 2021 Oct 29;15:603433. doi: 10.3389/fnins.2021.603433. eCollection 2021.
5. Spiking Autoencoders With Temporal Coding.
Front Neurosci. 2021 Aug 13;15:712667. doi: 10.3389/fnins.2021.712667. eCollection 2021.
6. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
PLoS Comput Biol. 2015 Dec 3;11(12):e1004566. doi: 10.1371/journal.pcbi.1004566. eCollection 2015 Dec.
7. Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
8. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity.
PLoS Comput Biol. 2013 Apr;9(4):e1003037. doi: 10.1371/journal.pcbi.1003037. Epub 2013 Apr 25.
9. Avalanches in a stochastic model of spiking neurons.
PLoS Comput Biol. 2010 Jul 8;6(7):e1000846. doi: 10.1371/journal.pcbi.1000846.
10. Evolving spiking neural networks for audiovisual information processing.
Neural Netw. 2010 Sep;23(7):819-35. doi: 10.1016/j.neunet.2010.04.009. Epub 2010 May 5.
11. Spiking neural networks.
Int J Neural Syst. 2009 Aug;19(4):295-308. doi: 10.1142/S0129065709002002.