Electroencephalography Brain Mapping Core, Center for Biomedical Imaging, Vaudois University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland.
J Neurosci. 2010 Aug 18;30(33):11210-21. doi: 10.1523/JNEUROSCI.2239-10.2010.
The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations, as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations indicate that statistically indistinguishable brain networks were active to differing degrees. First, responses to human versus animal vocalizations were significantly stronger, but topographically indistinguishable, at 169-219 ms after stimulus onset, with differences localized within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination thus occurs at latencies comparable to those of face discrimination but is not functionally specialized.
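As a point of clarification for the analysis logic above: in electrical neuroimaging, response strength is typically quantified as global field power (GFP, the spatial standard deviation across electrodes), while topography is compared via global map dissimilarity computed on GFP-normalized maps; stronger GFP with statistically unchanged topography is what licenses the "same networks, different degree of activity" reading. The following is a minimal NumPy sketch of these two measures, not code from the study; the array names, the 64-electrode by 500-sample shape, and the simulated data are illustrative assumptions.

```python
import numpy as np

def gfp(erp):
    """Global field power: spatial standard deviation across electrodes at
    each time point. `erp` is shaped (n_electrodes, n_times)."""
    centered = erp - erp.mean(axis=0, keepdims=True)   # average reference
    return np.sqrt((centered ** 2).mean(axis=0))

def diss(erp_a, erp_b, eps=1e-12):
    """Global map dissimilarity between two conditions: RMS difference of the
    GFP-normalized, average-referenced maps at each time point.
    0 means identical topographies; 2 means spatially inverted maps."""
    a = erp_a - erp_a.mean(axis=0, keepdims=True)
    b = erp_b - erp_b.mean(axis=0, keepdims=True)
    a = a / (gfp(erp_a) + eps)
    b = b / (gfp(erp_b) + eps)
    return np.sqrt(((a - b) ** 2).mean(axis=0))

# Illustrative use with simulated group-average AEPs (64 electrodes, 500 samples):
rng = np.random.default_rng(0)
aep_human = rng.standard_normal((64, 500))
aep_animal = rng.standard_normal((64, 500))

strength_diff = gfp(aep_human) - gfp(aep_animal)   # strength (GFP) contrast over time
topo_diff = diss(aep_human, aep_animal)            # topographic contrast over time
```

Under these assumptions, time periods where the GFP contrast is reliable across subjects while the dissimilarity contrast is not would correspond to the strength-without-topography modulations described in the abstract.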