

Similar Articles

1. On Cross-Corpus Generalization of Deep Learning Based Speech Enhancement. IEEE/ACM Trans Audio Speech Lang Process. 2020;28:2489-2499. doi: 10.1109/taslp.2020.3016487. Epub 2020 Aug 14.
2. Self-attending RNN for Speech Enhancement to Improve Cross-corpus Generalization. IEEE/ACM Trans Audio Speech Lang Process. 2022;30:1374-1385. doi: 10.1109/taslp.2022.3161143. Epub 2022 Mar 22.
3. Long short-term memory for speaker generalization in supervised speech separation. J Acoust Soc Am. 2017 Jun;141(6):4705. doi: 10.1121/1.4986931.
4. Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises. J Acoust Soc Am. 2016 May;139(5):2604. doi: 10.1121/1.4948445.
5. End-to-End Deep Convolutional Recurrent Models for Noise Robust Waveform Speech Enhancement. Sensors (Basel). 2022 Oct 13;22(20):7782. doi: 10.3390/s22207782.
6. Gated Residual Networks with Dilated Convolutions for Monaural Speech Enhancement. IEEE/ACM Trans Audio Speech Lang Process. 2019 Jan;27(1):189-198. doi: 10.1109/TASLP.2018.2876171. Epub 2018 Oct 15.
7. An Optimal Transport Analysis on Generalization in Deep Learning. IEEE Trans Neural Netw Learn Syst. 2023 Jun;34(6):2842-2853. doi: 10.1109/TNNLS.2021.3109942. Epub 2023 Jun 1.
8. CNN-based noise reduction for multi-channel speech enhancement system with discrete wavelet transform (DWT) preprocessing. PeerJ Comput Sci. 2024 Feb 28;10:e1901. doi: 10.7717/peerj-cs.1901. eCollection 2024.
9. Staining Invariant Features for Improving Generalization of Deep Convolutional Neural Networks in Computational Pathology. Front Bioeng Biotechnol. 2019 Aug 23;7:198. doi: 10.3389/fbioe.2019.00198. eCollection 2019.
10. A Novel Speech Intelligibility Enhancement Model based on Canonical Correlation and Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:2581-2584. doi: 10.1109/EMBC48229.2022.9871113.

Cited By

1. Multichannel speech enhancement for automatic speech recognition: a literature review. PeerJ Comput Sci. 2025 Mar 27;11:e2772. doi: 10.7717/peerj-cs.2772. eCollection 2025.
2. Improved tactile speech robustness to background noise with a dual-path recurrent neural network noise-reduction method. Sci Rep. 2024 Mar 28;14(1):7357. doi: 10.1038/s41598-024-57312-7.
3. Deep causal speech enhancement and recognition using efficient long-short term memory Recurrent Neural Network. PLoS One. 2024 Jan 3;19(1):e0291240. doi: 10.1371/journal.pone.0291240. eCollection 2024.
4. Progress made in the efficacy and viability of deep-learning-based noise reduction. J Acoust Soc Am. 2023 May 1;153(5):2751. doi: 10.1121/10.0019341.
5. Self-attending RNN for Speech Enhancement to Improve Cross-corpus Generalization. IEEE/ACM Trans Audio Speech Lang Process. 2022;30:1374-1385. doi: 10.1109/taslp.2022.3161143. Epub 2022 Mar 22.
6. A causal and talker-independent speaker separation/dereverberation deep learning algorithm: Cost associated with conversion to real-time capable operation. J Acoust Soc Am. 2021 Nov;150(5):3976. doi: 10.1121/10.0007134.
7. Deep learning based speaker separation and dereverberation can generalize across different languages to improve intelligibility. J Acoust Soc Am. 2021 Oct;150(4):2526. doi: 10.1121/10.0006565.
8. Towards Robust Speech Super-resolution. IEEE/ACM Trans Audio Speech Lang Process. 2021;29:2058-2066. doi: 10.1109/taslp.2021.3054302. Epub 2021 Jan 25.
9. Dense CNN with Self-Attention for Time-Domain Speech Enhancement. IEEE/ACM Trans Audio Speech Lang Process. 2021;29:1270-1279. doi: 10.1109/taslp.2021.3064421. Epub 2021 Mar 8.

References

1. A New Framework for CNN-Based Speech Enhancement in the Time Domain. IEEE/ACM Trans Audio Speech Lang Process. 2019 Jul;27(7):1179-1188. doi: 10.1109/taslp.2019.2913512. Epub 2019 Apr 29.
2. Learning Complex Spectral Mapping with Gated Convolutional Recurrent Networks for Monaural Speech Enhancement. IEEE/ACM Trans Audio Speech Lang Process. 2020;28:380-390. doi: 10.1109/taslp.2019.2955276. Epub 2019 Nov 22.
3. Gated Residual Networks with Dilated Convolutions for Monaural Speech Enhancement. IEEE/ACM Trans Audio Speech Lang Process. 2019 Jan;27(1):189-198. doi: 10.1109/TASLP.2018.2876171. Epub 2018 Oct 15.
4. Supervised Speech Separation Based on Deep Learning: An Overview. IEEE/ACM Trans Audio Speech Lang Process. 2018 Oct;26(10):1702-1726. doi: 10.1109/TASLP.2018.2842159. Epub 2018 May 30.
5. Two-stage Deep Learning for Noisy-reverberant Speech Enhancement. IEEE/ACM Trans Audio Speech Lang Process. 2019 Jan;27(1):53-62. doi: 10.1109/TASLP.2018.2870725. Epub 2018 Sep 17.
6. Deep Clustering and Conventional Networks for Music Separation: Stronger Together. Proc IEEE Int Conf Acoust Speech Signal Process. 2017 Mar;2017:61-65. doi: 10.1109/ICASSP.2017.7952118. Epub 2017 Jun 19.
7. Long short-term memory for speaker generalization in supervised speech separation. J Acoust Soc Am. 2017 Jun;141(6):4705. doi: 10.1121/1.4986931.
8. Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises. J Acoust Soc Am. 2016 May;139(5):2604. doi: 10.1121/1.4948445.
9. Complex Ratio Masking for Monaural Speech Separation. IEEE/ACM Trans Audio Speech Lang Process. 2016 Mar;24(3):483-492. doi: 10.1109/TASLP.2015.2512042. Epub 2015 Dec 23.
10. On Training Targets for Supervised Speech Separation. IEEE/ACM Trans Audio Speech Lang Process. 2014 Dec;22(12):1849-1858. doi: 10.1109/TASLP.2014.2352935.

On Cross-Corpus Generalization of Deep Learning Based Speech Enhancement

Authors

Pandey Ashutosh, Wang DeLiang

Affiliations

Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210 USA.

Department of Computer Science and Engineering and the Center for Cognitive and Brain Sciences, The Ohio State University, Columbus, OH 43210 USA.

Publication Information

IEEE/ACM Trans Audio Speech Lang Process. 2020;28:2489-2499. doi: 10.1109/taslp.2020.3016487. Epub 2020 Aug 14.

DOI: 10.1109/taslp.2020.3016487
PMID: 33748327
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7971413/
Abstract

In recent years, supervised approaches using deep neural networks (DNNs) have become the mainstream for speech enhancement. It has been established that DNNs generalize well to untrained noises and speakers if trained using a large number of noises and speakers. However, we find that DNNs fail to generalize to new speech corpora in low signal-to-noise ratio (SNR) conditions. In this work, we establish that the lack of generalization is mainly due to the channel mismatch, i.e., different recording conditions between the trained and untrained corpora. Additionally, we observe that traditional channel normalization techniques are not effective in improving cross-corpus generalization. Further, we evaluate publicly available datasets that are promising for generalization. We find one particular corpus to be significantly better than others. Finally, we find that using a smaller frame shift in short-time processing of speech can significantly improve cross-corpus generalization. The proposed techniques to address cross-corpus generalization include channel normalization, better training corpus, and smaller frame shift in short-time Fourier transform (STFT). These techniques together improve the objective intelligibility and quality scores on untrained corpora significantly.

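Two of the signal-level ideas in the abstract, a smaller STFT frame shift and channel normalization, can be illustrated in a few lines. This is a minimal sketch only: the 20 ms window, the 10 ms versus 2 ms shifts, and per-utterance mean-variance normalization of log-magnitude features are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import stft

sr = 16000                      # sample rate (Hz)
rng = np.random.default_rng(0)
x = rng.standard_normal(sr)     # 1 s of stand-in audio

win = int(0.020 * sr)           # 20 ms analysis window

# STFT with a conventional 10 ms frame shift ...
_, _, X_coarse = stft(x, fs=sr, nperseg=win, noverlap=win - int(0.010 * sr))
# ... versus a 2 ms shift: many more, heavily overlapped frames, which the
# paper reports improves cross-corpus generalization.
_, _, X_fine = stft(x, fs=sr, nperseg=win, noverlap=win - int(0.002 * sr))
print(X_coarse.shape, X_fine.shape)  # the fine shift yields roughly 5x the frames

# Per-utterance mean-variance normalization of log-magnitude features,
# one simple form of the channel normalization the paper discusses.
logmag = np.log(np.abs(X_fine) + 1e-8)
normed = (logmag - logmag.mean(axis=1, keepdims=True)) / (
    logmag.std(axis=1, keepdims=True) + 1e-8)
```

Normalizing each utterance's features to zero mean and unit variance per frequency bin removes a fixed spectral coloration imposed by the recording channel, which is one intuition behind attacking the channel mismatch described above.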