Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, California 94158, USA.
Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, California 94158, USA.
Nat Commun. 2016 Dec 20;7:13619. doi: 10.1038/ncomms13619.
Humans are adept at understanding speech despite the fact that our natural listening environment is often filled with interference. An example of this capacity is phoneme restoration, in which part of a word is completely replaced by noise, yet listeners report hearing the whole word. The neural basis of this unconscious fill-in phenomenon, a fundamental characteristic of human hearing, is unknown. Here, using direct cortical recordings in humans, we demonstrate that missing speech is restored at the acoustic-phonetic level in bilateral auditory cortex, in real time. This restoration is preceded by specific neural activity patterns in left frontal cortex, a separate language area, that predict the word participants later report hearing. These results demonstrate that during speech perception, missing acoustic content is synthesized online from the integration of incoming sensory cues and the internal neural dynamics that bias word-level expectation and prediction.