Institute for Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK.
Neuroimage. 2013 Jun;73:191-9. doi: 10.1016/j.neuroimage.2012.08.020. Epub 2012 Aug 17.
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are processed differentially in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening than for both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in the more ventral posterior superior temporal sulcus, responses were greater for reading aloud than for mouthing while listening. These data demonstrate an anterior-posterior division of superior temporal regions in which anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest that posterior fields are engaged in auditory processing for the guidance of articulation by auditory information.