
Spiking Tucker Fusion Transformer for Audio-Visual Zero-Shot Learning.

Authors

Li Wenrui, Wang Penghong, Xiong Ruiqin, Fan Xiaopeng

Publication

IEEE Trans Image Process. 2024;33:4840-4852. doi: 10.1109/TIP.2024.3430080. Epub 2024 Sep 5.

DOI: 10.1109/TIP.2024.3430080
PMID: 39042525
Abstract

Spiking neural networks (SNNs), which efficiently encode temporal sequences, have shown great potential in extracting audio-visual joint feature representations. However, coupling SNNs (binary spike sequences) with transformers (floating-point sequences) to jointly explore temporal-semantic information still faces challenges. In this paper, we introduce a novel Spiking Tucker Fusion Transformer (STFT) for audio-visual zero-shot learning (ZSL). The STFT leverages temporal and semantic information from different time steps to generate robust representations. A time-step factor (TSF) is introduced to dynamically synthesize subsequent inference information. To guide the formation of input membrane potentials and reduce spike noise, we propose a global-local pooling (GLP) scheme that combines max and average pooling operations. Furthermore, the thresholds of the spiking neurons are dynamically adjusted based on semantic and temporal cues. Integrating the temporal and semantic information extracted by SNNs and transformers is difficult because a straightforward bilinear model requires a large number of parameters. To address this, we introduce a temporal-semantic Tucker fusion module, which achieves multi-scale fusion of SNN and transformer outputs while maintaining full second-order interactions. Our experimental results demonstrate the effectiveness of the proposed approach in achieving state-of-the-art performance on three benchmark datasets. The harmonic mean (HM) improvements on VGGSound, UCF101, and ActivityNet are around 15.4%, 3.9%, and 14.9%, respectively.
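The "temporal-semantic Tucker fusion" described above replaces a full bilinear interaction tensor with a low-rank Tucker decomposition, keeping second-order cross-modal interactions while cutting parameters. A minimal NumPy sketch of this idea follows; all dimensions, weight names, and the random features are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def tucker_fusion(a, v, Wa, Wv, core, Wo):
    """Tucker-style bilinear fusion of two modality vectors.

    Instead of a full bilinear tensor of shape (da, dv, dout),
    each modality is projected to a low-rank space and contracted
    with a small core tensor, preserving second-order interactions.
    """
    a_p = Wa @ a                                   # audio -> low-rank factor (ra,)
    v_p = Wv @ v                                   # visual -> low-rank factor (rv,)
    # Core tensor contracts both factors: every pair (a_p[i], v_p[j]) interacts.
    z = np.einsum("i,ijk,j->k", a_p, core, v_p)    # fused low-rank code (rc,)
    return Wo @ z                                  # project to output space

rng = np.random.default_rng(0)
a = rng.standard_normal(128)        # hypothetical audio feature
v = rng.standard_normal(256)        # hypothetical visual feature
Wa = rng.standard_normal((16, 128))
Wv = rng.standard_normal((16, 256))
core = rng.standard_normal((16, 16, 32))
Wo = rng.standard_normal((64, 32))

out = tucker_fusion(a, v, Wa, Wv, core, Wo)
print(out.shape)  # (64,)
```

With these illustrative sizes, a full bilinear tensor would need 128 × 256 × 64 ≈ 2.1M parameters, while the Tucker factors and core total roughly 16K, which is the parameter saving the abstract alludes to.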


Similar Articles

1
SGLFormer: Spiking Global-Local-Fusion Transformer with high performance.
Front Neurosci. 2024 Mar 12;18:1371290. doi: 10.3389/fnins.2024.1371290. eCollection 2024.
2
Multi-scale full spike pattern for semantic segmentation.
Neural Netw. 2024 Aug;176:106330. doi: 10.1016/j.neunet.2024.106330. Epub 2024 Apr 20.
3
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.
Front Neurosci. 2021 Nov 4;15:756876. doi: 10.3389/fnins.2021.756876. eCollection 2021.
4
TransZero++: Cross Attribute-Guided Transformer for Zero-Shot Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):12844-12861. doi: 10.1109/TPAMI.2022.3229526. Epub 2023 Oct 3.
5
Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing.
Neural Netw. 2021 Dec;144:686-698. doi: 10.1016/j.neunet.2021.09.022. Epub 2021 Oct 5.
6
Learning long sequences in spiking neural networks.
Sci Rep. 2024 Sep 20;14(1):21957. doi: 10.1038/s41598-024-71678-8.
7
Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures.
Front Neurosci. 2020 Feb 28;14:119. doi: 10.3389/fnins.2020.00119. eCollection 2020.
8
Attention Spiking Neural Networks.
IEEE Trans Pattern Anal Mach Intell. 2023 Aug;45(8):9393-9410. doi: 10.1109/TPAMI.2023.3241201. Epub 2023 Jun 30.
9
Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition.
Neural Netw. 2023 Sep;166:410-423. doi: 10.1016/j.neunet.2023.07.008. Epub 2023 Jul 20.

Cited By

1
Relation-based self-distillation method for 2D object detection.
Sci Rep. 2025 Mar 18;15(1):9329. doi: 10.1038/s41598-025-93072-8.