Helen Wills Neuroscience Institute.
Berkeley Institute for Data Science.
J Neurosci. 2019 Sep 25;39(39):7722-7736. doi: 10.1523/JNEUROSCI.0675-19.2019. Epub 2019 Aug 19.
An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions in the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading is highly correlated in most semantically selective regions of cortex, and that models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

SIGNIFICANCE STATEMENT: Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken and written language. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.
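The cross-modality analysis described above can be illustrated with a minimal synthetic sketch of a voxelwise encoding model: ridge regression weights are fit from stimulus features to each simulated voxel's response in one "modality," then used to predict responses in the other. Everything here is synthetic and illustrative (the variable names, feature counts, and the closed-form ridge solver are assumptions for the example); the actual study used fMRI responses to narrative stories and far richer semantic feature spaces.

```python
# Synthetic sketch of voxelwise encoding with cross-modality prediction.
# Not the authors' pipeline: data, dimensions, and solver are illustrative.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)
n_time, n_features, n_voxels = 500, 20, 10

# Semantic feature time series for the stimuli in each "modality".
X_listen = rng.standard_normal((n_time, n_features))
X_read = rng.standard_normal((n_time, n_features))

# Assume one shared semantic tuning drives voxel responses in both modalities.
W_true = rng.standard_normal((n_features, n_voxels))
Y_listen = X_listen @ W_true + 0.1 * rng.standard_normal((n_time, n_voxels))
Y_read = X_read @ W_true + 0.1 * rng.standard_normal((n_time, n_voxels))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    return solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# Estimate semantic tuning from the listening data only.
W_hat = ridge_fit(X_listen, Y_listen)

# Use the listening-derived model to predict reading responses.
Y_pred = X_read @ W_hat

# Per-voxel prediction accuracy: correlation over time between
# predicted and observed reading responses.
r = np.array([np.corrcoef(Y_read[:, v], Y_pred[:, v])[0, 1]
              for v in range(n_voxels)])
print(f"mean cross-modality prediction r = {r.mean():.2f}")
```

Because the simulated tuning is shared across modalities, the listening-trained model predicts the reading responses well; the study's finding is that real cortical voxels behave analogously.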