Division of Communication and Auditory Neuroscience, House Ear Institute, Los Angeles, California, USA.
Hum Brain Mapp. 2011 Oct;32(10):1660-76. doi: 10.1002/hbm.21139. Epub 2010 Sep 17.
The talking face affords multiple types of information. To isolate cortical sites responsible for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to nonspeech and control stimuli. Group analyses showed distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus for speech, independent of the display medium. Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study.