
Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling

Authors

Dorzhigulov Anuar, Saxena Vishal

Affiliation

AMPIC Lab, Department of Electrical and Electronic Engineering, University of Delaware, Newark, DE, United States.

Publication

Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.

DOI: 10.3389/fnins.2023.1177592
PMID: 37534034
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10390782/
Abstract

We increasingly rely on deep learning algorithms to process colossal amounts of unstructured visual data. Commonly, these deep learning algorithms are deployed as software models on digital hardware, predominantly in data centers. The intrinsically high energy consumption of Cloud-based deployment of deep neural networks (DNNs) has inspired researchers to look for alternatives, resulting in high interest in Spiking Neural Networks (SNNs) and dedicated mixed-signal neuromorphic hardware. As a result, there is an emerging challenge to transfer DNN architecture functionality to energy-efficient spiking non-volatile memory (NVM)-based hardware with minimal loss in the accuracy of visual data processing. The Convolutional Neural Network (CNN) is the staple choice of DNN for visual data processing. However, the lack of analog-friendly spiking implementations and alternatives for some core CNN functions, such as MaxPool, hinders the conversion of CNNs into the spike domain, thus hampering neuromorphic hardware development. To address this gap, we propose MaxPool with temporal multiplexing for Spiking CNNs (SCNNs), which is amenable to implementation in mixed-signal circuits. We leverage the temporal dynamics of the internal membrane potential of Integrate & Fire neurons to enable MaxPool decision-making in the spiking domain. The proposed MaxPool models are implemented and tested within the SCNN architecture using a modified version of the aihwkit framework, a PyTorch-based toolkit for modeling and simulating hardware-based neural networks. The proposed spiking MaxPool scheme can decide even before the complete spatiotemporal input is applied, thus selectively trading off latency with accuracy.
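The membrane-potential-based early pooling decision described above can be sketched in a few lines. This is a toy Python model, not the paper's mixed-signal circuit: the function name `spiking_maxpool_early`, the (T, N) spike-train layout, and the use of an ideal non-leaky Integrate & Fire accumulator are illustrative assumptions; temporal multiplexing and the analog circuit details are omitted.

```python
import numpy as np

def spiking_maxpool_early(spikes, decision_frac=0.1):
    """Toy sketch of membrane-potential-based spiking MaxPool.

    spikes: (T, N) binary spike trains for the N units in one pooling window.
    decision_frac: fraction of the time window used to pick the winner
    (the abstract reports ~10% suffices, at ~1% accuracy cost).
    Returns the winning unit index and its full spike train, which is
    forwarded as the pooled output.
    """
    T, N = spikes.shape
    t_dec = max(1, int(T * decision_frac))
    # Ideal (non-leaky) Integrate & Fire: membrane potential is the
    # running sum of input spikes up to the decision time.
    v_mem = spikes[:t_dec].sum(axis=0)
    winner = int(np.argmax(v_mem))
    return winner, spikes[:, winner]
```

With `decision_frac=0.1`, the winner is selected from only the first 10% of time steps, mirroring the latency-versus-accuracy trade-off the abstract describes.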
By allocating just 10% of the spatiotemporal input window to the pooling decision, the proposed spiking MaxPool achieves up to 61.74% accuracy on the CIFAR10 classification task after backpropagation training with 2-bit weight resolution (chosen to reflect foundry-integrated ReRAM limitations), only about a 1% drop from the 62.78% accuracy obtained with the full 100% spatiotemporal window at the same resolution. In addition, we propose the realization of one of the proposed spiking MaxPool techniques in an NVM crossbar array, along with periphery circuits designed in a 130 nm CMOS technology. Energy-efficiency estimates show competitive performance compared to recent neuromorphic chip designs.
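The 2-bit weight resolution mentioned above can be illustrated with a simple uniform quantizer. This is a generic sketch, not aihwkit's actual conversion path: the function `quantize_weights` and its level mapping are illustrative assumptions for how a small number of ReRAM conductance levels constrains trainable weights.

```python
import numpy as np

def quantize_weights(w, n_bits=2):
    """Map weights onto 2**n_bits evenly spaced levels in [-w_max, +w_max].

    A rough stand-in for the limited number of stable conductance
    states available in foundry-integrated ReRAM devices.
    """
    n_levels = 2 ** n_bits              # e.g. 4 levels for 2-bit weights
    w_max = np.max(np.abs(w))
    if w_max == 0:
        return w.copy()
    step = 2 * w_max / (n_levels - 1)   # spacing between adjacent levels
    idx = np.clip(np.round((w + w_max) / step), 0, n_levels - 1)
    return idx * step - w_max
```

For 2-bit weights every value collapses onto one of only four levels, which is why the abstract reports accuracy at this resolution as the hardware-realistic baseline.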


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/4fb284de077c/fnins-17-1177592-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/23c6b18caf33/fnins-17-1177592-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/39fcdf190462/fnins-17-1177592-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/b55a067326e9/fnins-17-1177592-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/17cd2a124fe3/fnins-17-1177592-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/c0c0ad015b79/fnins-17-1177592-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/46bcdd042cb0/fnins-17-1177592-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/99c12cc59edd/fnins-17-1177592-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/edf26ec75cf9/fnins-17-1177592-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65c5/10390782/1a46fe90a75d/fnins-17-1177592-g0010.jpg

Similar articles

1
Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
2
A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
3
Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors.
Sci Rep. 2021 Dec 3;11(1):23376. doi: 10.1038/s41598-021-02779-x.
4
Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
5
Deep Learning With Spiking Neurons: Opportunities and Challenges.
Front Neurosci. 2018 Oct 25;12:774. doi: 10.3389/fnins.2018.00774. eCollection 2018.
6
Analog Convolutional Operator Circuit for Low-Power Mixed-Signal CNN Processing Chip.
Sensors (Basel). 2023 Dec 4;23(23):9612. doi: 10.3390/s23239612.
7
HFNet: A CNN Architecture Co-designed for Neuromorphic Hardware With a Crossbar Array of Synapses.
Front Neurosci. 2020 Oct 26;14:907. doi: 10.3389/fnins.2020.00907. eCollection 2020.
8
A Low-Power Spiking Neural Network Chip Based on a Compact LIF Neuron and Binary Exponential Charge Injector Synapse Circuits.
Sensors (Basel). 2021 Jun 29;21(13):4462. doi: 10.3390/s21134462.
9
Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets.
Front Neurosci. 2020 Mar 13;14:143. doi: 10.3389/fnins.2020.00143. eCollection 2020.
