
Fast and Accurate Sparse Coding of Visual Stimuli With a Simple, Ultralow-Energy Spiking Architecture.

Publication Info

IEEE Trans Neural Netw Learn Syst. 2019 Jul;30(7):2173-2187. doi: 10.1109/TNNLS.2018.2878002. Epub 2018 Nov 20.

DOI: 10.1109/TNNLS.2018.2878002
PMID: 30475732
Abstract

Memristive crossbars have become a popular means for realizing unsupervised and supervised learning techniques. In previous neuromorphic architectures with leaky integrate-and-fire neurons, the crossbar itself has been separated from the neuron capacitors to preserve mathematical rigor. In this paper, we sought to design a simplified sparse coding circuit without this restriction, resulting in a fast circuit that approximated a sparse coding operation at a minimal loss in accuracy. We showed that connecting the neurons directly to the crossbar resulted in a more energy-efficient sparse coding architecture and alleviated the need to prenormalize receptive fields. This paper provides derivations for the design of such a network, named the simple spiking locally competitive algorithm, as well as CMOS designs and results on the CIFAR and MNIST data sets. Compared to a nonspiking, nonapproximate model which scored 33% on CIFAR-10 with a single-layer classifier, this hardware scored 32% accuracy. When used with a state-of-the-art deep learning classifier, the nonspiking model achieved 82% and our simplified, spiking model achieved 80% while compressing the input data by 92%. Compared to a previously proposed spiking model, our proposed hardware consumed 99% less energy to do the same work at 21 × the throughput. Accuracy held out with online learning to a write variance of 3%, suitable for the often reported 4-bit resolution required for neuromorphic algorithms, with offline learning to a write variance of 27%, and with read variance to 40%. The proposed architecture's excellent accuracy, throughput, and significantly lower energy usage demonstrate the utility of our innovations.
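The circuit described above approximates a sparse coding operation of the kind performed by the locally competitive algorithm (LCA), in which neurons driven by a stimulus inhibit one another through overlapping receptive fields until only a few stay active. As a rough software illustration of those dynamics (a non-spiking sketch, not the paper's memristive hardware; the dictionary size, threshold, and step parameters below are arbitrary choices for demonstration):

```python
import numpy as np

def soft_threshold(u, lam):
    """Activation: exactly zero below the threshold lam, shrunk by lam above it."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, n_steps=200, dt=1.0):
    """Run LCA dynamics to find a sparse code a such that x ~= Phi @ a.

    x   : input stimulus vector
    Phi : dictionary with unit-norm columns (receptive fields)
    lam : sparsity threshold; tau, dt, n_steps : integration parameters
    """
    n_neurons = Phi.shape[1]
    b = Phi.T @ x                          # feedforward drive to each neuron
    G = Phi.T @ Phi - np.eye(n_neurons)    # lateral inhibition (no self-inhibition)
    u = np.zeros(n_neurons)                # membrane potentials
    for _ in range(n_steps):
        a = soft_threshold(u, lam)         # only above-threshold neurons fire
        u += (dt / tau) * (b - u - G @ a)  # leaky integration with competition
    return soft_threshold(u, lam)
```

At the fixed point this recovers a LASSO-style sparse code; the hardware in the paper realizes the same competition in analog, with the crossbar computing the `Phi.T @ x` and `G @ a` products directly in conductances.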


Similar Articles

1. Fast and Accurate Sparse Coding of Visual Stimuli With a Simple, Ultralow-Energy Spiking Architecture.
IEEE Trans Neural Netw Learn Syst. 2019 Jul;30(7):2173-2187. doi: 10.1109/TNNLS.2018.2878002. Epub 2018 Nov 20.
2. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.
3. A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS.
IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):145-158. doi: 10.1109/TBCAS.2018.2880425. Epub 2018 Nov 9.
4. Biologically plausible deep learning - But how far can we go with shallow networks?
Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
5. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures.
Front Neurosci. 2019 Mar 7;13:95. doi: 10.3389/fnins.2019.00095. eCollection 2019.
6. Photonic spiking neural networks with event-driven femtojoule optoelectronic neurons based on Izhikevich-inspired model.
Opt Express. 2022 May 23;30(11):19360-19389. doi: 10.1364/OE.449528.
7. Spiking neural networks for handwritten digit recognition - Supervised learning and network optimization.
Neural Netw. 2018 Jul;103:118-127. doi: 10.1016/j.neunet.2018.03.019. Epub 2018 Apr 6.
8. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding.
Front Neurosci. 2016 Jan 6;9:484. doi: 10.3389/fnins.2015.00484. eCollection 2015.
9. Sparse coding with a somato-dendritic rule.
Neural Netw. 2020 Nov;131:37-49. doi: 10.1016/j.neunet.2020.06.007. Epub 2020 Jun 26.
10. STDP-based spiking deep convolutional neural networks for object recognition.
Neural Netw. 2018 Mar;99:56-67. doi: 10.1016/j.neunet.2017.12.005. Epub 2017 Dec 23.

Cited By

1. Energy-Efficient Neuromorphic Architectures for Nuclear Radiation Detection Applications.
Sensors (Basel). 2024 Mar 27;24(7):2144. doi: 10.3390/s24072144.