
Decoding stimuli (tool-hand) and viewpoint invariant grasp-type information.

Affiliations

Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal.

Center for Mind/ Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy.

Publication

Cortex. 2021 Jun;139:152-165. doi: 10.1016/j.cortex.2021.03.004. Epub 2021 Mar 19.

Abstract

When we see a manipulable object (henceforth tool) or a hand performing a grasping movement, our brain is automatically tuned to how that tool can be grasped (i.e., its affordance) or what kind of grasp that hand is performing (e.g., a power or precision grasp). However, it remains unclear where visual information related to tools or hands is transformed into abstract grasp representations. We therefore investigated where different levels of abstractness in grasp information are processed: grasp information that is invariant to the kind of stimulus that elicits it (tool-hand invariance), and grasp information that is hand-specific but viewpoint-invariant (viewpoint invariance). We focused on brain areas activated when viewing both tools and hands, i.e., the posterior parietal cortices (PPC), ventral premotor cortices (PMv), and lateral occipitotemporal cortex/posterior middle temporal cortex (LOTC/pMTG). To test for invariant grasp representations, we presented participants with tool images and grasp videos (from a first- or third-person perspective; 1pp or 3pp) inside an MRI scanner, and cross-decoded power versus precision grasps across (i) grasp perspectives (viewpoint invariance), (ii) tool images and grasp 1pp videos (tool-hand 1pp invariance), and (iii) tool images and grasp 3pp videos (tool-hand 3pp invariance). Tool-hand 1pp, but not tool-hand 3pp, invariant grasp information was found in left PPC, whereas viewpoint-invariant information was found bilaterally in PPC, left PMv, and left LOTC/pMTG. These findings suggest different levels of abstractness: visual information is transformed into stimulus-invariant grasp representations/tool affordances in left PPC, and into viewpoint-invariant but hand-specific grasp representations in the hand network.

