
Neural decoding of attentional selection in multi-speaker environments without access to separated sources.

Authors

O'Sullivan James, Sheth Sameer A, McKhann Guy, Mehta Ashesh D, Mesgarani Nima

Publication

Annu Int Conf IEEE Eng Med Biol Soc. 2017 Jul;2017:1644-1647. doi: 10.1109/EMBC.2017.8037155.

Abstract

People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Modern hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation without knowing which speaker is being attended to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. A number of challenges exist, including the lack of access to the clean sound sources in the environment to compare with the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. We present an end-to-end system that 1) receives a single audio channel containing a mixture of speakers that is heard by a listener, along with the listener's neural signals, 2) automatically separates the individual speakers in the mixture, 3) determines the attended speaker, and 4) amplifies the attended speaker's voice to assist the listener. Using invasive electrophysiology recordings, our system is able to decode the attention of a subject and detect switches in attention using only the mixed audio. We also identified the regions of the auditory cortex that contribute to AAD. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research, and moves us closer to the development of cognitively controlled hearing aids.
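The decoding step described above is commonly implemented by stimulus reconstruction: a pre-trained linear decoder maps the neural recording to an estimate of the attended speech envelope, which is then correlated against the envelopes of the automatically separated sources; the source with the highest correlation is taken as attended and amplified in the remix. The sketch below is a minimal illustration of that scheme, not the authors' implementation; the function names, the single linear decoder, and the fixed gain are assumptions for illustration.

```python
import numpy as np


def decode_attention(neural, decoder, source_envelopes):
    """Pick the attended speaker via stimulus reconstruction.

    neural           : (time, channels) array of neural recordings
    decoder          : (channels,) pre-trained linear reconstruction weights
    source_envelopes : list of (time,) envelopes of the separated speakers
    Returns the index of the most correlated source and all correlations.
    """
    reconstructed = neural @ decoder  # estimated attended envelope, (time,)
    corrs = [np.corrcoef(reconstructed, env)[0, 1] for env in source_envelopes]
    return int(np.argmax(corrs)), corrs


def remix(sources, attended_idx, gain_db=12.0):
    """Amplify the attended source, re-sum the separated signals, normalize."""
    gain = 10.0 ** (gain_db / 20.0)
    out = sum((gain if i == attended_idx else 1.0) * s
              for i, s in enumerate(sources))
    return out / np.max(np.abs(out))  # peak-normalize to avoid clipping
```

In a real system the decoder weights would be fit on training data (e.g. by regularized least squares between neural responses and clean envelopes), and decoding would run on short sliding windows so that switches in attention can be detected over time.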

