

A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.

Authors

Zou Chenglong, Cui Xiaoxin, Kuang Yisong, Liu Kefei, Wang Yuan, Wang Xinan, Huang Ru

Affiliations

Institute of Microelectronics, Peking University, Beijing, China.

School of ECE, Peking University Shenzhen Graduate School, Shenzhen, China.

Publication

Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.

DOI: 10.3389/fnins.2021.694170
PMID: 34867142
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8636746/
Abstract

Artificial neural networks (ANNs), such as convolutional neural networks (CNNs), have achieved state-of-the-art results on many machine learning tasks. However, inference with large-scale full-precision CNNs incurs substantial energy consumption and memory occupation, which seriously hinders their deployment on mobile and embedded systems. Inspired by the biological brain, spiking neural networks (SNNs) are emerging as an alternative because of their natural advantage in brain-like learning and their high energy efficiency from event-driven communication and computation. Nevertheless, training a deep SNN remains a major challenge, and there is usually a large accuracy gap between ANNs and SNNs. In this paper, we introduce a hardware-friendly conversion algorithm called "scatter-and-gather" to convert quantized ANNs to lossless SNNs, where neurons are connected with ternary {-1,0,1} synaptic weights. Each spiking neuron is stateless and closer to the original McCulloch-Pitts model: it fires at most one spike and is reset at each time step. Furthermore, we develop an incremental mapping framework to demonstrate efficient network deployment on a reconfigurable neuromorphic chip. Experimental results show that our spiking LeNet on MNIST and VGG-Net on CIFAR-10 obtain 99.37% and 91.91% classification accuracy, respectively. Besides, the presented mapping algorithm manages network deployment on our neuromorphic chip with maximum resource efficiency and excellent flexibility. Our four-spike LeNet and VGG-Net on chip achieve real-time inference speeds of 0.38 ms/image and 3.24 ms/image, with average energy consumption of 0.28 mJ/image and 2.3 mJ/image at 0.9 V, 252 MHz, which is nearly two orders of magnitude more efficient than traditional GPUs.
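As an illustrative aside, the stateless neuron model the abstract describes — ternary {-1,0,1} synaptic weights, binary spikes, at most one spike per neuron per time step, and a reset after every step — can be sketched in a few lines of NumPy. This is a minimal sketch of the neuron model only; the function name, shapes, and threshold value are hypothetical and not taken from the paper's implementation:

```python
import numpy as np

def stateless_spiking_layer(spikes_in, W, threshold):
    """One time step of a stateless spiking layer.

    spikes_in: binary {0,1} input spike vector
    W: ternary {-1,0,1} synaptic weight matrix
    Each neuron fires at most one spike per step; the membrane
    value is discarded afterwards, so no state carries over.
    """
    membrane = W @ spikes_in                        # integer accumulation
    spikes_out = (membrane >= threshold).astype(np.int8)
    return spikes_out                               # membrane is reset (dropped)

# Hypothetical sizes for demonstration: 8 inputs, 4 neurons.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))                # ternary weights in {-1,0,1}
x = rng.integers(0, 2, size=8)                      # binary input spikes
out = stateless_spiking_layer(x, W, threshold=1)
```

Because the neuron carries no state between steps, a multi-step ("four-spike") inference simply calls the same function once per time step on each step's input spikes.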


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/f799629fa291/fnins-15-694170-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/2e1d00894cb3/fnins-15-694170-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/8882ac5e48d9/fnins-15-694170-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/1ab9d2e04db0/fnins-15-694170-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/b3f4d74605b1/fnins-15-694170-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/cafb23bfe4c3/fnins-15-694170-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/fc65ba5f5835/fnins-15-694170-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/b6a872dc6ff2/fnins-15-694170-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/7224678ebf6d/fnins-15-694170-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/3183be0ffc95/fnins-15-694170-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2b99/8636746/e406ffe414c8/fnins-15-694170-g0011.jpg

Similar Articles

1. A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware.
   Front Neurosci. 2021 Nov 16;15:694170. doi: 10.3389/fnins.2021.694170. eCollection 2021.
2. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
   Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
3. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4. Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms.
   Sensors (Basel). 2021 May 7;21(9):3240. doi: 10.3390/s21093240.
5. A TTFS-based energy and utilization efficient neuromorphic CNN accelerator.
   Front Neurosci. 2023 May 5;17:1121592. doi: 10.3389/fnins.2023.1121592. eCollection 2023.
6. On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.
   Front Neurosci. 2020 Jul 7;14:423. doi: 10.3389/fnins.2020.00423. eCollection 2020.
7. Spiking CMOS-NVM mixed-signal neuromorphic ConvNet with circuit- and training-optimized temporal subsampling.
   Front Neurosci. 2023 Jul 18;17:1177592. doi: 10.3389/fnins.2023.1177592. eCollection 2023.
8. Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
   Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
9. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges.
   Front Neurosci. 2020 Jun 24;14:634. doi: 10.3389/fnins.2020.00634. eCollection 2020.
10. Effective Plug-Ins for Reducing Inference-Latency of Spiking Convolutional Neural Networks During Inference Phase.
   Front Comput Neurosci. 2021 Oct 18;15:697469. doi: 10.3389/fncom.2021.697469. eCollection 2021.

Cited By

1. Fine spatial-temporal density mapping with optimized approaches for many-core system.
   Front Neurosci. 2025 Apr 3;19:1512926. doi: 10.3389/fnins.2025.1512926. eCollection 2025.
2. An all integer-based spiking neural network with dynamic threshold adaptation.
   Front Neurosci. 2024 Dec 17;18:1449020. doi: 10.3389/fnins.2024.1449020. eCollection 2024.
3. Critically synchronized brain waves form an effective, robust and flexible basis for human memory and learning.
   Sci Rep. 2023 Mar 16;13(1):4343. doi: 10.1038/s41598-023-31365-6.

References

1. Efficient Spike-Driven Learning With Dendritic Event-Based Processing.
   Front Neurosci. 2021 Feb 19;15:601109. doi: 10.3389/fnins.2021.601109. eCollection 2021.
2. BiCoSS: Toward Large-Scale Cognition Brain With Multigranular Neuromorphic Architecture.
   IEEE Trans Neural Netw Learn Syst. 2022 Jul;33(7):2801-2815. doi: 10.1109/TNNLS.2020.3045492. Epub 2022 Jul 6.
3. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
   Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
4. Spatial Properties of STDP in a Self-Learning Spiking Neural Network Enable Controlling a Mobile Robot.
   Front Neurosci. 2020 Feb 26;14:88. doi: 10.3389/fnins.2020.00088. eCollection 2020.
5. Deep learning in spiking neural networks.
   Neural Netw. 2019 Mar;111:47-63. doi: 10.1016/j.neunet.2018.12.002. Epub 2018 Dec 18.
6. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
   Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.
7. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.
   Front Neurosci. 2017 Dec 7;11:682. doi: 10.3389/fnins.2017.00682. eCollection 2017.
8. Training Deep Spiking Neural Networks Using Backpropagation.
   Front Neurosci. 2016 Nov 8;10:508. doi: 10.3389/fnins.2016.00508. eCollection 2016.
9. Convolutional networks for fast, energy-efficient neuromorphic computing.
   Proc Natl Acad Sci U S A. 2016 Oct 11;113(41):11441-11446. doi: 10.1073/pnas.1604850113. Epub 2016 Sep 20.
10. The tempotron: a neuron that learns spike timing-based decisions.
   Nat Neurosci. 2006 Mar;9(3):420-8. doi: 10.1038/nn1643. Epub 2006 Feb 12.