Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom.
School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom.
J Neurosci. 2021 Jun 16;41(24):5263-5273. doi: 10.1523/JNEUROSCI.0083-21.2021. Epub 2021 May 10.
Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms in which tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this assumption has rarely been directly tested. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even when these partially overlapped with the hand-selective areas. Importantly, these effects were specific to actions with tools and absent for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand-selective than in tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tools vs nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the brain's primary tool for interacting with the world.