

Impact of reduced spectral resolution on temporal-coherence-based source segregation.

Author information

Viswanathan Vibha, Heinz Michael G, Shinn-Cunningham Barbara G

Affiliations

Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA.

Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA.

Publication information

J Acoust Soc Am. 2024 Dec 1;156(6):3862-3876. doi: 10.1121/10.0034545.

Abstract

Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution, a hallmark of both electric/CI hearing (from current spread) and acoustic hearing with sensorineural hearing loss (from broadened tuning), degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, our physiologically inspired computational model of temporal-coherence-based segregation predicts that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our online behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
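The core mechanistic claim — that broader tuning makes activity across frequency channels more correlated, blurring which channel "belongs" to which source — can be illustrated with a toy simulation. This is a minimal sketch, not the authors' physiological model: the carrier frequencies, channel weights, and envelope smoothing below are all illustrative assumptions, and cochlear/CI filtering is idealized as a weighted blend of two independent sources.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                 # sample rate in Hz (illustrative)
t = np.arange(fs) / fs     # 1 s of time samples

def slow_envelope(smooth_ms=20):
    """An independent, slowly varying, positive amplitude envelope."""
    k = int(fs * smooth_ms / 1000)
    e = np.convolve(rng.standard_normal(t.size), np.ones(k) / k, mode="same")
    return 1.0 + e / (np.abs(e).max() + 1e-12)   # roughly in [0, 2]

# Two independent "sources": different carriers, independent envelopes.
src_a = slow_envelope() * np.sin(2 * np.pi * 500 * t)
src_b = slow_envelope() * np.sin(2 * np.pi * 2000 * t)

def channel_env(w_a, w_b, smooth_ms=10):
    """Envelope of one idealized frequency channel.

    Filtering is idealized as a weighted blend of the two sources;
    the weights stand in for how selectively the channel passes each one.
    """
    rect = np.abs(w_a * src_a + w_b * src_b)             # rectify
    k = int(fs * smooth_ms / 1000)
    return np.convolve(rect, np.ones(k) / k, mode="same")  # remove carrier

# Sharp tuning: each channel is dominated by a single source.
r_sharp = np.corrcoef(channel_env(1.0, 0.05), channel_env(0.05, 1.0))[0, 1]

# Broad tuning (as with CI current spread): channels overlap heavily.
r_broad = np.corrcoef(channel_env(1.0, 0.6), channel_env(0.6, 1.0))[0, 1]

print(f"cross-channel envelope correlation, sharp tuning: {r_sharp:.2f}")
print(f"cross-channel envelope correlation, broad tuning: {r_broad:.2f}")
```

With selective channels, each channel's envelope tracks one source's independent modulation, so the cross-channel correlation stays low; with overlapping channels, both envelopes reflect a mixture of the same two sources and the correlation rises. This is exactly the situation the abstract describes in which temporal coherence becomes a less reliable grouping cue, since spuriously coherent channels no longer disambiguate the sources.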

