Three-stage hybrid spiking neural networks fine-tuning for speech enhancement.

Author Information

Abuhajar Nidal, Wang Zhewei, Baltes Marc, Yue Ye, Xu Li, Karanth Avinash, Smith Charles D, Liu Jundong

Affiliations

School of Electrical Engineering and Computer Science, Ohio University, Athens, OH, United States.

Department of Hearing, Speech, and Language Sciences, Ohio University, Athens, OH, United States.

Publication Information

Front Neurosci. 2025 Apr 30;19:1567347. doi: 10.3389/fnins.2025.1567347. eCollection 2025.

Abstract

INTRODUCTION

In the past decade, artificial neural networks (ANNs) have revolutionized many AI-related fields, including Speech Enhancement (SE). However, achieving high performance with ANNs often requires substantial power and memory resources. Recently, spiking neural networks (SNNs) have emerged as a promising low-power alternative to ANNs, leveraging their inherent sparsity to enable efficient computation while maintaining performance.

METHOD

While SNNs offer improved energy efficiency, they are generally more challenging to train than ANNs. In this study, we propose a three-stage hybrid ANN-to-SNN fine-tuning scheme and apply it to Wave-U-Net and ConvTasNet, two major network solutions for speech enhancement. Our framework first trains the ANN models and then converts them into their corresponding spiking versions. The converted SNNs are subsequently fine-tuned with a hybrid training scheme, in which the forward pass uses spiking signals while the backward pass uses ANN signals, enabling standard backpropagation. To preserve the performance of the original ANN models, we made several modifications to the network architectures. Our SNN models operate entirely in the temporal domain, eliminating the need to convert waveforms into the spectral domain for input and back to waveforms for output. Moreover, our models use spiking neurons exclusively, setting them apart from many models that incorporate regular ANN neurons in their architectures.
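
The hybrid forward/backward scheme described above can be sketched as a surrogate-gradient activation: the forward pass emits binary spikes, while the backward pass differentiates a smooth ANN-style function in their place. The PyTorch snippet below is a minimal illustration under that assumption, not the authors' implementation; the class name HybridSpike, the sigmoid surrogate, and the threshold value are all hypothetical choices for demonstration.

```python
# Minimal sketch of hybrid spiking-forward / ANN-backward training.
# Assumption: a sigmoid surrogate stands in for the "ANN signal" in the
# backward pass; the paper does not specify this exact function.

import torch


class HybridSpike(torch.autograd.Function):
    """Forward: hard thresholding into {0, 1} spikes.
    Backward: gradient of a sigmoid surrogate, so backpropagation works."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold=1.0):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        # Spiking forward pass: fire where the potential crosses threshold.
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # ANN-style backward pass: differentiate a sigmoid centered at the
        # threshold instead of the non-differentiable step function.
        sig = torch.sigmoid(membrane_potential - ctx.threshold)
        return grad_output * sig * (1.0 - sig), None


if __name__ == "__main__":
    # Toy fine-tuning step: gradients flow despite the binary forward pass.
    v = torch.randn(8, requires_grad=True)
    spikes = HybridSpike.apply(v, 1.0)
    spikes.sum().backward()
    print(v.grad)  # non-zero thanks to the surrogate backward
```

The key point is that the non-differentiable step function fires spikes in the forward pass, while gradients are computed as if a smooth activation had been used; this is what lets a converted SNN be fine-tuned with ordinary backpropagation after the ANN-to-SNN conversion stage.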

RESULTS AND DISCUSSION

Experiments on the noisy VCTK and TIMIT datasets demonstrate the effectiveness of the hybrid training: the fine-tuned SNNs show significant improvements and greater robustness compared with the baseline models.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b5ff/12075214/b643c4d6c2ec/fnins-19-1567347-g0001.jpg
