Correia Joao M, Jansma Bernadette M B, Bonte Milene
Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, and Maastricht Brain Imaging Center, 6229 EV Maastricht, The Netherlands.
J Neurosci. 2015 Nov 11;35(45):15015-25. doi: 10.1523/JNEUROSCI.0977-15.2015.
The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception, however, remains highly controversial. Methodological constraints of experimental design and analysis have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus, an area associated with the perception of human voices, as well as bilateral somatosensory regions. Our findings highlight the close connection between brain systems for speech perception and production and, in particular, indicate the availability of articulatory codes during passive speech perception.
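For illustration, the sketch below shows the cross-syllable generalization (cross-decoding) logic described above: a classifier is trained on one acoustic class (stops, /pa/ vs /ta/) and tested on another (fricatives, /fa/ vs /sa/) so that above-chance accuracy reflects a feature code shared across acoustic forms. This is a minimal, hypothetical example using synthetic data and scikit-learn; all variable names (X_stops, y_place, etc.) are illustrative and stand in for the single-trial fMRI response patterns used in the study.

```python
# Minimal sketch of cross-syllable generalization decoding with synthetic data.
# A "place of articulation" signal is embedded in both stop and fricative trials;
# training on stops and testing on fricatives probes whether that code generalizes
# beyond the acoustic surface form (hypothetical stand-in for the fMRI analysis).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Shared multivoxel pattern coding labial (+1) vs alveolar (-1) place of articulation.
place_signal = rng.normal(size=n_voxels)

def simulate(labels, noise=1.0):
    """Synthetic single-trial voxel patterns: noise plus a weak shared place code."""
    X = rng.normal(scale=noise, size=(len(labels), n_voxels))
    X += 0.3 * np.outer(labels, place_signal)
    return X

y_place = np.repeat([1, -1], n_trials // 2)   # +1 = labial, -1 = alveolar
X_stops = simulate(y_place)                   # /pa/ (labial) vs /ta/ (alveolar)
X_fricatives = simulate(y_place)              # /fa/ (labial) vs /sa/ (alveolar)

# Train on stop syllables, test on fricative syllables: accuracy above 0.5 indicates
# that the place-of-articulation distinction generalizes across acoustic classes.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_stops, y_place)
print("stops -> fricatives accuracy:", clf.score(X_fricatives, y_place))
```

In the study itself, this generalization scheme was applied to fMRI response patterns (and, analogously, to manner of articulation and voicing), with above-chance cross-class accuracy taken as evidence for articulatory coding beyond the acoustic form.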
Sensorimotor integration is central to verbal communication and provides a link between the auditory signals of speech perception and the motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor) representations in addition to acoustic/phonetic ones. In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific responses to speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception.