Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level.

Affiliations

Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany.

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.

Publication information

Hum Brain Mapp. 2021 Aug 15;42(12):3963-3982. doi: 10.1002/hbm.25532. Epub 2021 May 27.

Abstract

Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so-called 'face-benefit' is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face-benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face-sensitive regions while participants recognised the identity of auditory-only speakers (previously learned by face) in high (SNR -4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face-benefit in both noise levels, for most participants (16 of 21). In high-noise, the recognition of face-learned speakers engaged the right posterior superior temporal sulcus motion-sensitive face area (pSTS-mFA), a region implicated in the processing of dynamic facial cues. The face-benefit in high-noise also correlated positively with increased functional connectivity between this region and voice-sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face-benefit. In low-noise, the face-benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS-mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice-identity recognition in auditory-only listening conditions.
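The high- and low-noise listening conditions are defined by signal-to-noise ratio in decibels (SNR -4 dB and SNR +4 dB). As a rough illustration of what those levels mean in practice, the sketch below shows one common way to scale a noise track so that mixing it with a speech track yields a target SNR. This is not the authors' stimulus-generation code; the function name and the synthetic signals are placeholders for illustration only.

```python
import numpy as np

def scale_noise_to_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that mixing it with `speech` gives the target SNR in dB.

    SNR(dB) = 10 * log10(P_speech / P_noise), where P is mean signal power.
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Noise power required for the target SNR, then the amplitude scaling factor.
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return noise * np.sqrt(target_p_noise / p_noise)

# Placeholder signals standing in for a 1 s speech clip and a noise clip at 16 kHz.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)

high_noise_mix = speech + scale_noise_to_snr(speech, noise, snr_db=-4.0)  # high noise (SNR -4 dB)
low_noise_mix = speech + scale_noise_to_snr(speech, noise, snr_db=+4.0)   # low noise (SNR +4 dB)
```

In this convention a negative SNR means the noise power exceeds the speech power, so the -4 dB condition is the harder listening condition reported in the abstract.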

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/18fa/8288083/8bb46610b33f/HBM-42-3963-g003.jpg
