Department of Neurosurgery, Baylor College of Medicine, Texas, United States.
Michael E. DeBakey Veterans Affairs Medical Center, Texas, United States.
Elife. 2018 Feb 27;7:e30387. doi: 10.7554/eLife.30387.
Human faces contain multiple sources of information. During speech perception, visual information from the talker's mouth is integrated with auditory information from the talker's voice. By directly recording neural responses from small populations of neurons in patients implanted with subdural electrodes, we found enhanced visual cortex responses to speech when auditory speech was absent (rendering visual speech especially relevant). Receptive field mapping demonstrated that this enhancement was specific to regions of the visual cortex with retinotopic representations of the mouth of the talker. Connectivity between frontal cortex and other brain regions was measured with trial-by-trial power correlations. Strong connectivity was observed between frontal cortex and mouth regions of visual cortex; connectivity was weaker between frontal cortex and non-mouth regions of visual cortex or auditory cortex. These results suggest that top-down selection of visual information from the talker's mouth by frontal cortex plays an important role in audiovisual speech perception.