Li Wenrui, Wang Penghong, Xiong Ruiqin, Fan Xiaopeng
IEEE Trans Image Process. 2024;33:4840-4852. doi: 10.1109/TIP.2024.3430080. Epub 2024 Sep 5.
Spiking neural networks (SNNs), which efficiently encode temporal sequences, have shown great potential for extracting audio-visual joint feature representations. However, coupling SNNs (binary spike sequences) with Transformers (floating-point sequences) to jointly explore temporal-semantic information still faces challenges. In this paper, we introduce a novel Spiking Tucker Fusion Transformer (STFT) for audio-visual zero-shot learning (ZSL). The STFT leverages temporal and semantic information from different time steps to generate robust representations. A time-step factor (TSF) is introduced to dynamically synthesize subsequent inference information. To guide the formation of input membrane potentials and reduce spike noise, we propose a global-local pooling (GLP) scheme that combines max and average pooling operations. Furthermore, the thresholds of the spiking neurons are dynamically adjusted based on semantic and temporal cues. Integrating the temporal and semantic information extracted by SNNs and Transformers is difficult because a straightforward bilinear model requires a large number of parameters. To address this, we introduce a temporal-semantic Tucker fusion module, which achieves multi-scale fusion of SNN and Transformer outputs while maintaining full second-order interactions. Our experimental results demonstrate that the proposed approach achieves state-of-the-art performance on three benchmark datasets. The harmonic mean (HM) improvements on VGGSound, UCF101, and ActivityNet are around 15.4%, 3.9%, and 14.9%, respectively.
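The GLP idea of combining max and average pooling can be sketched as follows. This is a minimal illustration of blending the two pooling operators, not the paper's exact module; the function name `global_local_pooling` and the blend weight `alpha` are assumptions for illustration.

```python
import numpy as np

def global_local_pooling(x, alpha=0.5):
    """Blend max pooling (local salient spikes) with average pooling
    (global activity) over the time axis of x with shape
    (batch, channels, time). alpha is a hypothetical mixing weight."""
    max_pool = x.max(axis=-1)   # keeps the strongest response per channel
    avg_pool = x.mean(axis=-1)  # keeps the overall activity level
    return alpha * max_pool + (1.0 - alpha) * avg_pool
```

With `alpha=0.5` the result is the simple midpoint of the two pooled maps; in the paper the combination additionally guides membrane-potential formation, which this sketch does not model.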
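The parameter-saving idea behind Tucker fusion can be illustrated generically: instead of a full bilinear map between the two modality features (quadratic in their dimensions), each feature is projected into a low-rank space and contracted with a small core tensor, preserving full second-order interactions at reduced cost. The dimensions and names below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_a, d_b = 8, 8          # input feature dims (e.g., SNN and Transformer outputs)
r_a, r_b, r_o = 4, 4, 4  # Tucker ranks (hypothetical)
d_out = 5                # fused output dim

U = rng.normal(size=(r_a, d_a))        # projection for modality a
V = rng.normal(size=(r_b, d_b))        # projection for modality b
core = rng.normal(size=(r_a, r_b, r_o))  # small core tensor
W_o = rng.normal(size=(d_out, r_o))    # output projection

def tucker_fuse(a, b):
    # project each modality into its low-rank space
    pa, pb = U @ a, V @ b
    # contract with the core tensor: every pair (pa_i, pb_j) interacts,
    # so second-order interactions are fully retained
    z = np.einsum('i,j,ijk->k', pa, pb, core)
    return W_o @ z
```

Note the fusion is bilinear: scaling one input scales the output linearly, which is the defining second-order property this factorization keeps while avoiding a dense `d_a × d_b × d_out` weight tensor.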