Spatiotemporal properties of the neural representation of conceptual content for words and pictures - an MEG study.

Affiliations

Center for Mind/Brain Science, University of Trento, Trento, Italy.

Department of Cognitive Science, Johns Hopkins University, Baltimore, United States.

Publication Information

Neuroimage. 2020 Oct 1;219:116913. doi: 10.1016/j.neuroimage.2020.116913. Epub 2020 May 7.

Abstract

The entwined nature of perceptual and conceptual processes renders an understanding of the interplay between perceptual recognition and conceptual access a continuing challenge. Here, to disentangle perceptual and conceptual processing in the brain, we combine magnetoencephalography (MEG), picture and word presentation, and representational similarity analysis (RSA). We replicate previous findings of early and robust sensitivity to semantic distances between objects presented as pictures and show that earlier (~105 msec), but not later, representations can be accounted for by contemporary computer models of visual similarity (AlexNet). Conceptual content for word stimuli is reliably present in two temporal clusters, the first ranging from 230 to 335 msec, the second from 360 to 585 msec. The time course of picture-induced semantic content and the spatial location of conceptual representation were highly convergent, and the spatial distribution of both differed from that of words. While this may reflect differences in picture- and word-induced conceptual access, it underscores potential confounds in visual perceptual and conceptual processing. On the other hand, using the stringent criterion that neural and conceptual spaces must align, the robust representation of semantic content by 230-240 msec for visually unconfounded word stimuli significantly advances estimates of the timeline of semantic access and its orthographic and lexical precursors.
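For readers less familiar with the method, the sketch below illustrates the core logic of time-resolved RSA as described in the abstract: at each time point, a neural dissimilarity matrix over the stimulus set is correlated with a model dissimilarity matrix (e.g., semantic distances, or distances between AlexNet feature activations). This is a minimal illustration only, not the authors' analysis code; the function name, array shapes, and random example data are assumptions made for the sketch.

```python
# Minimal RSA sketch (illustrative only; not the authors' analysis code).
# Assumes MEG evoked responses arranged as (n_stimuli, n_sensors, n_times)
# and a precomputed model dissimilarity matrix -- e.g., semantic distances
# or distances between AlexNet layer activations -- of shape
# (n_stimuli, n_stimuli).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def rsa_timecourse(meg_data, model_rdm):
    """Spearman correlation between the neural RDM and a model RDM at each time point."""
    n_stim, n_sensors, n_times = meg_data.shape
    model_vec = squareform(model_rdm, checks=False)  # condensed upper triangle
    rho = np.empty(n_times)
    for t in range(n_times):
        # Neural RDM: correlation distance between sensor patterns across stimuli
        neural_vec = pdist(meg_data[:, :, t], metric="correlation")
        rho[t], _ = spearmanr(neural_vec, model_vec)
    return rho


# Toy example with random data standing in for real recordings.
rng = np.random.default_rng(0)
meg = rng.standard_normal((20, 102, 300))                    # 20 stimuli, 102 sensors, 300 samples
model_rdm = squareform(pdist(rng.standard_normal((20, 8))))  # hypothetical model feature space
print(rsa_timecourse(meg, model_rdm).shape)                  # (300,)
```

In practice, a cluster-based permutation test over the resulting time course would identify windows of reliable model-brain correspondence, along the lines of the temporal clusters reported in the abstract.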

Similar Articles

Tracking neural coding of perceptual and semantic features of concrete nouns.
Neuroimage. 2012 Aug 1;62(1):451-63. doi: 10.1016/j.neuroimage.2012.04.048. Epub 2012 May 4.
Decoding the Cortical Dynamics of Sound-Meaning Mapping.
J Neurosci. 2017 Feb 1;37(5):1312-1319. doi: 10.1523/JNEUROSCI.2858-16.2016. Epub 2016 Dec 27.

Cited By

Resolving the time course of visual and auditory object categorization.
J Neurophysiol. 2022 Jun 1;127(6):1622-1628. doi: 10.1152/jn.00515.2021. Epub 2022 May 18.
Representations of conceptual information during automatic and active semantic access.
Neuropsychologia. 2021 Sep 17;160:107953. doi: 10.1016/j.neuropsychologia.2021.107953. Epub 2021 Jul 9.

References Cited in This Article

Predicting the Time Course of Individual Objects with MEG.
Cereb Cortex. 2015 Oct;25(10):3602-12. doi: 10.1093/cercor/bhu203. Epub 2014 Sep 9.
