Khoshkhoo Sattar, Leonard Matthew K, Mesgarani Nima, Chang Edward F
School of Medicine, University of California, San Francisco, 505 Parnassus Ave., San Francisco, CA 94143, United States.
Department of Neurological Surgery, University of California, San Francisco, 505 Parnassus Ave., San Francisco, CA 94143, United States; Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Ln., Room 535, San Francisco, CA 94158, United States; Weill Institute for Neurosciences, University of California, San Francisco, 675 Nelson Rising Ln., Room 535, San Francisco, CA 94158, United States.
Brain Lang. 2018 Dec;187:83-91. doi: 10.1016/j.bandl.2018.01.007. Epub 2018 Feb 4.
Auditory speech comprehension is the result of neural computations that occur in a broad network that includes the temporal lobe auditory cortex and the left inferior frontal cortex. It remains unclear how representations in this network differentially contribute to speech comprehension. Here, we recorded high-density direct cortical activity during a sine-wave speech (SWS) listening task to examine detailed neural speech representations when the exact same acoustic input is comprehended versus not comprehended. Listeners heard SWS sentences (pre-exposure), followed by clear versions of the same sentences, which revealed the content of the sounds (exposure), and then the same SWS sentences again (post-exposure). Across all three task phases, high-gamma neural activity in the superior temporal gyrus was similar, distinguishing different words based on bottom-up acoustic features. In contrast, frontal regions showed a more pronounced and sudden increase in activity only when the input was comprehended, which corresponded with stronger representational separability among spatiotemporal activity patterns evoked by different words. We observed this effect only in participants who were not able to comprehend the stimuli during the pre-exposure phase, indicating a relationship between frontal high-gamma activity and speech understanding. Together, these results demonstrate that both frontal and temporal cortical networks are involved in spoken language understanding, and that under certain listening conditions, frontal regions are involved in discriminating speech sounds.