

Fourier or Wavelet bases as counterpart self-attention in spikformer for efficient visual classification.

Authors

Wang Qingyu, Zhang Duzhen, Cai Xinyuan, Zhang Tielin, Xu Bo

Affiliations

Institute of Automation, Chinese Academy of Sciences, Beijing, China.

School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.

Publication

Front Neurosci. 2025 Jan 29;18:1516868. doi: 10.3389/fnins.2024.1516868. eCollection 2024.

DOI: 10.3389/fnins.2024.1516868
PMID: 39944522
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11814459/
Abstract

Energy-efficient spikformer has been proposed by integrating the biologically plausible spiking neural network (SNN) and artificial transformer, whereby the spiking self-attention (SSA) is used to achieve both higher accuracy and lower computational cost. However, it seems that self-attention is not always necessary, especially in sparse spike-form calculation manners. In this article, we innovatively replace vanilla SSA (using dynamic bases calculating from Query and Key) with spike-form Fourier transform, wavelet transform, and their combinations (using fixed triangular or wavelets bases), based on a key hypothesis that both of them use a set of basis functions for information transformation. Hence, the Fourier-or-Wavelet-based spikformer (FWformer) is proposed and verified in visual classification tasks, including both static image and event-based video datasets. The FWformer can achieve comparable or even higher accuracies (0.4%-1.5%), higher running speed (9%-51% for training and 19%-70% for inference), reduced theoretical energy consumption (20%-25%), and reduced graphic processing unit (GPU) memory usage (4%-26%), compared to the standard spikformer. Our result indicates the continuous refinement of new transformers that are inspired either by biological discovery (spike-form), or information theory (Fourier or Wavelet transform), is promising.
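The abstract's core idea, replacing the dynamic bases computed from Query and Key with fixed Fourier (or wavelet) bases followed by spike-form activation, can be illustrated in a few lines. This is not the authors' FWformer implementation; it is a minimal NumPy sketch under assumed tensor shapes, using FNet-style real-FFT token mixing and a Heaviside step as a stand-in for a spiking neuron's firing function:

```python
import numpy as np

def fourier_mixer(x):
    """Fixed-basis token mixing: instead of computing dynamic attention
    weights from Query and Key, apply a 2-D discrete Fourier transform
    over the token and channel axes and keep only the real part."""
    return np.fft.fft2(x, axes=(-2, -1)).real

def spike(x, threshold=0.0):
    """Heaviside firing: emit a binary spike wherever the mixed
    activation crosses the threshold (a crude stand-in for a
    leaky integrate-and-fire neuron's output)."""
    return (x > threshold).astype(np.float32)

# Hypothetical sizes: 16 tokens, 64 channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64))
spikes = spike(fourier_mixer(x))

assert spikes.shape == x.shape            # mixing preserves shape
assert set(np.unique(spikes)) <= {0.0, 1.0}  # output is spike-form (binary)
```

Because the Fourier bases are fixed, no Query/Key products need to be computed or stored, which is consistent with the abstract's reported savings in running time and GPU memory relative to dynamic spiking self-attention.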

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b64/11814459/e2261ce211bd/fnins-18-1516868-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b64/11814459/4fe418fb481d/fnins-18-1516868-g0002.jpg

Similar Articles

1. Fourier or Wavelet bases as counterpart self-attention in spikformer for efficient visual classification.
Front Neurosci. 2025 Jan 29;18:1516868. doi: 10.3389/fnins.2024.1516868. eCollection 2024.
2. Auto-Spikformer: Spikformer architecture search.
Front Neurosci. 2024 Jul 23;18:1372257. doi: 10.3389/fnins.2024.1372257. eCollection 2024.
3. Spike-HAR++: an energy-efficient and lightweight parallel spiking transformer for event-based human action recognition.
Front Comput Neurosci. 2024 Nov 26;18:1508297. doi: 10.3389/fncom.2024.1508297. eCollection 2024.
4. SpikeAtConv: an integrated spiking-convolutional attention architecture for energy-efficient neuromorphic vision processing.
Front Neurosci. 2025 Mar 12;19:1536771. doi: 10.3389/fnins.2025.1536771. eCollection 2025.
5. SpQuant-SNN: ultra-low precision membrane potential with sparse activations unlock the potential of on-device spiking neural networks applications.
Front Neurosci. 2024 Sep 4;18:1440000. doi: 10.3389/fnins.2024.1440000. eCollection 2024.
6. Spike-Based Approximate Backpropagation Algorithm of Brain-Inspired Deep SNN for Sonar Target Classification.
Comput Intell Neurosci. 2022 Oct 20;2022:1633946. doi: 10.1155/2022/1633946. eCollection 2022.
7. LDD: High-Precision Training of Deep Spiking Neural Network Transformers Guided by an Artificial Neural Network.
Biomimetics (Basel). 2024 Jul 6;9(7):413. doi: 10.3390/biomimetics9070413.
8. Using a Low-Power Spiking Continuous Time Neuron (SCTN) for Sound Signal Processing.
Sensors (Basel). 2021 Feb 4;21(4):1065. doi: 10.3390/s21041065.
9. Attention Spiking Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9393-9410. doi: 10.1109/TPAMI.2023.3241201. Epub 2023 Jun 30.
10. Spiking-PhysFormer: Camera-based remote photoplethysmography with parallel spike-driven transformer.
Neural Netw. 2025 May;185:107128. doi: 10.1016/j.neunet.2025.107128. Epub 2025 Jan 10.

Cited By

1. A data-centric and interpretable EEG framework for depression severity grading using SHAP-based insights.
J Neuroeng Rehabil. 2025 May 25;22(1):116. doi: 10.1186/s12984-025-01645-5.
2. Retinal vessel density and cognitive function in healthy older adults.
Exp Brain Res. 2025 Apr 15;243(5):114. doi: 10.1007/s00221-025-07076-x.
3. Evaluation of a low-cost portable NIRS device for monitoring muscle ischemia.

References

1. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence.
Sci Adv. 2023 Oct 6;9(40):eadi1480. doi: 10.1126/sciadv.adi1480.
2. Spiking Deep Residual Networks.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):5200-5205. doi: 10.1109/TNNLS.2021.3119238. Epub 2023 Aug 4.
3. Self-backpropagation of synaptic modifications elevates the efficiency of spiking and artificial neural networks.
Sci Adv. 2021 Oct 22;7(43):eabh0146. doi: 10.1126/sciadv.abh0146. Epub 2021 Oct 20.
4. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
Neural Netw. 2021 Dec;144:686-698. doi: 10.1016/j.neunet.2021.09.022. Epub 2021 Oct 5.
5. DIET-SNN: A Low-Latency Spiking Neural Network With Direct Input Encoding and Leakage and Threshold Optimization.
IEEE Trans Neural Netw Learn Syst. 2023 Jun;34(6):3174-3182. doi: 10.1109/TNNLS.2021.3111897. Epub 2023 Jun 1.
6. LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6249-6262. doi: 10.1109/TNNLS.2021.3073016. Epub 2022 Oct 27.
7. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE).
Front Neurosci. 2020 May 12;14:424. doi: 10.3389/fnins.2020.00424. eCollection 2020.
8. Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.
Front Neurosci. 2020 May 5;14:439. doi: 10.3389/fnins.2020.00439. eCollection 2020.
9. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
10. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks.
Front Neurosci. 2018 May 23;12:331. doi: 10.3389/fnins.2018.00331. eCollection 2018.