SNN-BERT: Training-efficient Spiking Neural Networks for energy-efficient BERT.

Affiliations

School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.

Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.

Publication Information

Neural Netw. 2024 Dec;180:106630. doi: 10.1016/j.neunet.2024.106630. Epub 2024 Aug 20.

DOI: 10.1016/j.neunet.2024.106630
PMID: 39208467
Abstract

Spiking Neural Networks (SNNs) are naturally suited to processing sequence tasks such as NLP at low power, owing to their brain-inspired spatio-temporal dynamics and spike-driven nature. Current SNNs employ "repeat coding," which re-enters all input tokens at every timestep; this fails to fully exploit the temporal relationships between tokens and introduces memory overhead. In this work, we align the number of input tokens with the number of timesteps and refer to this input coding as "individual coding." To cope with the longer training time that individual coding incurs as the number of timesteps grows dramatically, we design a Bidirectional Parallel Spiking Neuron (BPSN) with the following features: first, BPSN supports parallel spike computation and effectively avoids the issue of uninterrupted firing; second, BPSN excels at adaptive sequence-length tasks, a capability that existing work lacks; third, the fusion of bidirectional information enhances the temporal modeling capability of SNNs. To validate the effectiveness of BPSN, we present SNN-BERT, a deep, directly trained SNN architecture based on the BERT model in NLP. Compared to the prior 4-timestep repeat-coding baseline, our method achieves a 6.46× reduction in energy consumption and a 16.1% performance improvement, raising the performance upper bound of the SNN domain on the GLUE benchmark to 74.4%. Additionally, our method achieves 3.5× training acceleration and 3.8× training memory savings. Compared with artificial neural networks of similar architecture, we obtain comparable performance with up to 22.5× better energy efficiency. Code will be made available.

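To make the two input-coding schemes in the abstract concrete, here is a minimal, hypothetical PyTorch sketch contrasting repeat coding (the full token sequence re-entered at every timestep) with individual coding (one token per timestep, so the timestep count equals the sequence length). The hard-reset LIF neuron, tensor shapes, and function names are illustrative assumptions for a forward pass only; this is not the paper's BPSN or its released code.

```python
# Illustrative sketch only: contrasts the two input-coding schemes described
# in the abstract. The simple hard-reset LIF dynamics below are an assumption,
# not the paper's Bidirectional Parallel Spiking Neuron (BPSN).
import torch

def lif_step(x, v, tau=2.0, v_th=1.0):
    """One leaky integrate-and-fire update: returns spikes and the new potential."""
    v = v + (x - v) / tau               # leaky integration toward the input
    spikes = (v >= v_th).float()        # fire where the threshold is crossed
    v = v * (1.0 - spikes)              # hard reset after a spike
    return spikes, v

def repeat_coding(tokens, timesteps=4):
    """Re-enter ALL token embeddings at every timestep (T copies of the input)."""
    B, L, D = tokens.shape
    v = torch.zeros(B, L, D)
    out = []
    for _ in range(timesteps):          # activation memory scales with T x L
        s, v = lif_step(tokens, v)
        out.append(s)
    return torch.stack(out)             # (T, B, L, D)

def individual_coding(tokens):
    """Align timesteps with tokens: feed one token per timestep (T == L)."""
    B, L, D = tokens.shape
    v = torch.zeros(B, D)
    out = []
    for t in range(L):                  # temporal relations between tokens are explicit
        s, v = lif_step(tokens[:, t], v)
        out.append(s)
    return torch.stack(out)             # (L, B, D)

x = torch.randn(2, 8, 16)               # (batch, tokens, embedding dim)
print(repeat_coding(x).shape)            # torch.Size([4, 2, 8, 16])
print(individual_coding(x).shape)        # torch.Size([8, 2, 16])
```

Note how individual coding keeps activations proportional to the sequence length L rather than T × L, which is the memory saving the abstract alludes to. The paper's BPSN additionally computes spikes in parallel across timesteps and fuses bidirectional information; the sequential loop above does not show either of those properties.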

Similar Articles

1. SNN-BERT: Training-efficient Spiking Neural Networks for energy-efficient BERT.
   Neural Netw. 2024 Dec;180:106630. doi: 10.1016/j.neunet.2024.106630. Epub 2024 Aug 20.
2. Multi-scale full spike pattern for semantic segmentation.
   Neural Netw. 2024 Aug;176:106330. doi: 10.1016/j.neunet.2024.106330. Epub 2024 Apr 20.
3. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
   Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4. Backpropagation-Based Learning Techniques for Deep Spiking Neural Networks: A Survey.
   IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):11906-11921. doi: 10.1109/TNNLS.2023.3263008. Epub 2024 Sep 3.
5. Rethinking the performance comparison between SNNS and ANNS.
   Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
6. High-performance deep spiking neural networks via at-most-two-spike exponential coding.
   Neural Netw. 2024 Aug;176:106346. doi: 10.1016/j.neunet.2024.106346. Epub 2024 Apr 27.
7. Enhancing SNN-based spatio-temporal learning: A benchmark dataset and Cross-Modality Attention model.
   Neural Netw. 2024 Dec;180:106677. doi: 10.1016/j.neunet.2024.106677. Epub 2024 Sep 3.
8. HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5841-5855. doi: 10.1109/TNNLS.2021.3131356. Epub 2023 Sep 1.
9. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
   Neural Netw. 2021 Dec;144:686-698. doi: 10.1016/j.neunet.2021.09.022. Epub 2021 Oct 5.
10. Delay learning based on temporal coding in Spiking Neural Networks.
    Neural Netw. 2024 Dec;180:106678. doi: 10.1016/j.neunet.2024.106678. Epub 2024 Aug 31.