

A temporal hierarchy for conspecific vocalization discrimination in humans.

Affiliation

Electroencephalography Brain Mapping Core, Center for Biomedical Imaging, Vaudois University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland.

Publication information

J Neurosci. 2010 Aug 18;30(33):11210-21. doi: 10.1523/JNEUROSCI.2239-10.2010.

Abstract

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses to human versus animal vocalizations were significantly stronger, though topographically indistinguishable, starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with that of face discrimination but is not functionally specialized.


Similar articles

Rapid brain discrimination of sounds of objects.
J Neurosci. 2006 Jan 25;26(4):1293-302. doi: 10.1523/JNEUROSCI.4511-05.2006.

The role of actions in auditory object discrimination.
Neuroimage. 2009 Nov 1;48(2):475-85. doi: 10.1016/j.neuroimage.2009.06.041. Epub 2009 Jun 24.

Cited by

Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations.
Cogn Neuropsychol. 2021 Oct-Dec;38(7-8):468-489. doi: 10.1080/02643294.2022.2085085. Epub 2022 Jun 21.

References

The role of actions in auditory object discrimination.
Neuroimage. 2009 Nov 1;48(2):475-85. doi: 10.1016/j.neuroimage.2009.06.041. Epub 2009 Jun 24.
