Feedback-Driven Sensory Mapping Adaptation for Robust Speech Activity Detection.

Author Information

Bellur Ashwin, Elhilali Mounya

Affiliations

Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218 USA.

Publication Information

IEEE/ACM Trans Audio Speech Lang Process. 2017 Mar;25(3):481-492. doi: 10.1109/TASLP.2016.2639322. Epub 2016 Dec 13.

Abstract

Parsing natural acoustic scenes using computational methodologies poses many challenges. Given the rich and complex nature of the acoustic environment, data mismatch between train and test conditions is a major hurdle in data-driven audio processing systems. In contrast, the brain exhibits a remarkable ability to segment acoustic scenes with relative ease. When tackling challenging listening conditions that are often faced in everyday life, the biological system relies on a number of principles that allow it to effortlessly parse its rich soundscape. In the current study, we leverage a key principle employed by the auditory system: its ability to adapt the neural representation of its sensory input in a high-dimensional space. We propose a framework that mimics this process in a computational model for robust speech activity detection. The system employs a 2-D Gabor filter bank whose parameters are retuned offline to improve the separability between the feature representation of speech and nonspeech sounds. This retuning process, driven by feedback from statistical models of speech and nonspeech classes, attempts to minimize the misclassification risk of mismatched data, with respect to the original statistical models. We hypothesize that this risk minimization procedure results in an emphasis on unique speech and nonspeech modulations in the high-dimensional space. We show that such an adapted system is indeed robust to other novel conditions, with a marked reduction in equal error rates for a variety of databases with additive and convolutive noise distortions. We discuss the lessons learned from biology with regard to adapting to an ever-changing acoustic environment and the impact on building truly intelligent audio processing systems.
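The processing chain described above (a 2-D Gabor filter bank applied to a time-frequency representation, retuned offline using feedback so that speech and nonspeech features become more separable) can be illustrated with a short sketch. The snippet below is not the authors' implementation: the kernel parameterization, the Fisher-style separability score, and the random-perturbation retuning loop are assumptions standing in for the paper's statistical models and risk-minimization procedure, and the spectrogram inputs are random placeholders. It assumes only NumPy and SciPy.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# 2-D Gabor filtering of a log-spectrogram plus a toy feedback-driven
# retuning step that keeps parameter changes improving class separability.
import numpy as np
from scipy.signal import fftconvolve

def gabor_2d(rate_hz, scale_cyc_oct, sigma_t=0.06, sigma_f=0.5,
             dt=0.01, df=0.125, half_len=16):
    """Build one 2-D Gabor kernel over (time, log-frequency).

    rate_hz       -- temporal modulation rate (Hz), assumed range
    scale_cyc_oct -- spectral modulation scale (cyc/octave), assumed range
    """
    t = np.arange(-half_len, half_len + 1) * dt          # seconds
    f = np.arange(-half_len, half_len + 1) * df          # octaves
    T, F = np.meshgrid(t, f, indexing="ij")
    envelope = np.exp(-(T**2 / (2 * sigma_t**2) + F**2 / (2 * sigma_f**2)))
    carrier = np.cos(2 * np.pi * (rate_hz * T + scale_cyc_oct * F))
    g = envelope * carrier
    return g - g.mean()                                   # zero-DC kernel

def gabor_features(log_spec, rates, scales):
    """Map a log-spectrogram (time x freq) into a high-dimensional
    modulation representation: one filtered plane per (rate, scale) pair."""
    planes = [fftconvolve(log_spec, gabor_2d(r, s), mode="same")
              for r in rates for s in scales]
    return np.stack(planes, axis=-1)                      # (time, freq, n_filters)

def separability(feats_speech, feats_nonspeech):
    """Toy stand-in for the paper's risk criterion: ratio of between-class
    distance to within-class spread of mean filter-output energies."""
    a = np.abs(feats_speech).mean(axis=(0, 1))
    b = np.abs(feats_nonspeech).mean(axis=(0, 1))
    return np.sum((a - b) ** 2) / (a.var() + b.var() + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech_spec = rng.standard_normal((200, 64))          # placeholder spectrograms
    noise_spec = rng.standard_normal((200, 64))
    rates, scales = [2.0, 4.0, 8.0], [0.25, 0.5, 1.0]     # assumed initial bank

    # "Feedback" loop sketch: perturb the filter parameters offline and keep
    # candidates that improve speech/nonspeech separability of the features.
    best = separability(gabor_features(speech_spec, rates, scales),
                        gabor_features(noise_spec, rates, scales))
    for _ in range(5):
        cand_rates = [r * float(rng.uniform(0.9, 1.1)) for r in rates]
        score = separability(gabor_features(speech_spec, cand_rates, scales),
                             gabor_features(noise_spec, cand_rates, scales))
        if score > best:
            rates, best = cand_rates, score
    print("retuned rates:", np.round(rates, 2), "separability:", round(best, 3))
```

In the paper the feedback comes from statistical models of the speech and nonspeech classes and the adaptation minimizes misclassification risk on mismatched data; the random-search update above only stands in for that offline retuning step.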

Similar Articles

1. A Framework for Speech Activity Detection Using Adaptive Auditory Receptive Fields.
IEEE/ACM Trans Audio Speech Lang Process. 2015 Dec;23(12):2422-2433. doi: 10.1109/TASLP.2015.2481179. Epub 2015 Sep 23.

Cited By

1. Audio object classification using distributed beliefs and attention.
IEEE/ACM Trans Audio Speech Lang Process. 2020;28:729-739. doi: 10.1109/taslp.2020.2966867. Epub 2020 Jan 15.
2. Modelling auditory attention.
Philos Trans R Soc Lond B Biol Sci. 2017 Feb 19;372(1714). doi: 10.1098/rstb.2016.0101. Epub 2017 Jan 2.

