Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland; Department of Comparative Language Science, Evolutionary Neuroscience of Language, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland.
Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER Maastricht, The Netherlands.
Neuroimage. 2022 Sep;258:119375. doi: 10.1016/j.neuroimage.2022.119375. Epub 2022 Jun 11.
Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded with acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and to identical stimuli with different syllable reports, we determined whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. Most of these regions lie outside what are traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.
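The core of the decoding approach described above is to test whether local voxel patterns discriminate trials by the listener's syllable report. A minimal sketch of that logic, using entirely synthetic data and a simple leave-one-out nearest-centroid decoder (the trial counts, voxel counts, and classifier choice here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 20 trials per percept, 50 voxels per searchlight sphere.
# Labels encode the syllable report (/da/ vs /ga/), not the stimulus acoustics.
n_trials, n_voxels = 20, 50
da_mean = rng.normal(0.0, 1.0, n_voxels)  # mean pattern for /da/ reports
ga_mean = rng.normal(0.0, 1.0, n_voxels)  # mean pattern for /ga/ reports

def simulate(mean, n):
    """Draw n noisy trials around a condition's mean voxel pattern."""
    return mean + rng.normal(0.0, 0.5, (n, n_voxels))

X = np.vstack([simulate(da_mean, n_trials), simulate(ga_mean, n_trials)])
y = np.array([0] * n_trials + [1] * n_trials)

def nearest_centroid_cv(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), dtype=bool)
        train[i] = False
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += int(pred == y[i])
    return correct / len(y)

acc = nearest_centroid_cv(X, y)
print(acc)  # above-chance accuracy indicates the patterns carry report information
```

In a real searchlight analysis this decoding step is repeated for a small sphere of voxels centered on every brain location, and the resulting accuracy map is tested against chance; running the same decoder once with report labels and once with acoustic labels implements the dissociation the abstract describes.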