Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands.
Neuroimage. 2012 Sep;62(3):1877-83. doi: 10.1016/j.neuroimage.2012.06.010. Epub 2012 Jun 19.
Understanding the temporal dynamics underlying cortical processing of auditory categories is complicated by difficulties in equating temporal and spectral features across stimulus classes. In the present magnetoencephalography (MEG) study, female voices and cat sounds were filtered so as to match in most of their acoustic properties, and the respective auditory evoked responses were investigated with a paradigm that allowed us to examine auditory cortical processing of two natural sound categories beyond the physical make-up of the stimuli. Three cat or human voice sounds were first presented to establish a categorical context. Subsequently, a probe sound that was congruent, incongruent, or ambiguous with respect to this context was presented. As an index of a categorical mismatch, MEG responses to incongruent sounds were stronger than the responses to congruent sounds at ~250 ms in the right temporoparietal cortex, regardless of the sound category. Furthermore, probe sounds that could not be unambiguously attributed to either of the two categories ("cat" or "voice") evoked stronger responses after the voice context than after the cat context at 200-250 ms, suggesting a stronger contextual effect for human voices. Our results suggest that categorical templates for human and animal vocalizations are established at ~250 ms in the right temporoparietal cortex, likely reflecting continuous online analysis of spectral stimulus features during an auditory categorization task.