Evidence for similar patterns of neural activity elicited by picture- and word-based representations of natural scenes.

Affiliations

Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA.

Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA; Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA; Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA.

Publication

Neuroimage. 2017 Jul 15;155:422-436. doi: 10.1016/j.neuroimage.2017.03.037. Epub 2017 Mar 24.

Abstract

A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: studies have either examined only disparate categories (e.g., animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflect only large-scale differences between the categories, or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, identify a more fine-grained (i.e., higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g., "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g., phrase stimuli) would transfer (i.e., cross-decode) to the other stimulus type (e.g., picture stimuli).
The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and frontal cortices. This similarity of neural activity patterns across the two input types, for categories that co-activate local brain regions, provides strong evidence of a common semantic code for pictures and words in the brain.
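The cross-decoding logic described above (train a classifier on patterns evoked by one stimulus type, test it on the other) can be illustrated with a minimal sketch on synthetic data. This is not the authors' analysis pipeline; the voxel counts, noise model, and classifier choice here are illustrative assumptions, standing in for real fMRI beta patterns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic voxel patterns: 4 scene categories, several trials each.
n_categories, n_trials, n_voxels = 4, 20, 100

# Assumption of a common code: each category has a shared voxel-pattern
# "signature" that is preserved across modalities, plus trial noise.
category_signal = rng.normal(size=(n_categories, n_voxels))

def simulate_modality(noise_sd=1.0):
    """Return (patterns, labels) for one stimulus type (words or pictures)."""
    X = np.repeat(category_signal, n_trials, axis=0)
    X = X + rng.normal(scale=noise_sd, size=X.shape)
    y = np.repeat(np.arange(n_categories), n_trials)
    return X, y

X_words, y_words = simulate_modality()  # e.g., phrase blocks
X_pics, y_pics = simulate_modality()    # e.g., photograph blocks

# Cross-decoding: fit on word-evoked patterns, test on picture-evoked ones.
clf = LogisticRegression(max_iter=1000).fit(X_words, y_words)
acc = accuracy_score(y_pics, clf.predict(X_pics))
print(f"cross-decoding accuracy: {acc:.2f} (chance = 0.25)")
```

If the category structure did not transfer across modalities (i.e., no shared signature), accuracy would fall to the 0.25 chance level for four categories, which is the null hypothesis the cross-decoding test is designed to reject.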

