Caglar Leyla Roksan, Walbrin Jon, Akwayena Emefa, Almeida Jorge, Mahon Bradford Z
Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213.
Faculty of Psychology and Educational Sciences, Proaction Lab & Center for Research in Neuropsychology and Cognitive-Behavioral Interventions (CINECC), University of Coimbra, Coimbra 3000, Portugal.
Proc Natl Acad Sci U S A. 2025 Aug 26;122(34):e2421032122. doi: 10.1073/pnas.2421032122. Epub 2025 Aug 20.
The inferior parietal lobule supports action representations that are necessary to grasp and use objects in a functionally appropriate manner [S. H. Johnson-Frey, 71-78 (2004)]. The supramarginal gyrus (SMG) is a structure within the inferior parietal lobule that specifically processes object-directed patterns of manipulation during functional object use. Here, we demonstrate that neural representations of complex object-directed actions in the SMG can be predicted by a linear encoding model that componentially builds complex actions from an empirically defined set of kinematic synergies. Each kinematic synergy represents a unique combination of finger, hand, wrist, and arm postures and movements. Control analyses demonstrate that models based on image-computable similarity (AlexNet, ResNet50, VGG16) robustly predict variance in visual areas, but not in the SMG. We also show that SMG activity is specifically modulated by kinematic (as opposed to visual) properties of object-directed actions. The action-relevant, as opposed to visually relevant, nature of the representations supported by the SMG aligns with findings from neuropsychological studies of upper limb apraxia. These findings support a model in which kinematic synergies are the basic unit of representation, out of which the SMG componentially builds object-directed actions. In combination with other findings [Q. Chen et al., 2162-2174 (2018)], we suggest that kinematic synergies are related to complex object-directed actions in a similar way to how articulatory and voicing features combine to form phonological segments in spoken language production.
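To make the encoding approach concrete, the sketch below illustrates a generic voxel-wise linear encoding analysis of the kind the abstract describes: each viewed object-directed action is represented as a vector of kinematic synergy weights, and a cross-validated regularized linear model maps those weights to fMRI responses, with held-out prediction accuracy per voxel indexing how well the synergy space explains a region's activity. This is a minimal, hypothetical illustration, not the authors' actual pipeline; the variable names, feature dimensions, and ridge/cross-validation settings are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

# Illustrative dimensions (assumptions, not the paper's design):
# n_stimuli object-directed action videos, each described by n_synergies
# kinematic synergy weights; n_voxels fMRI responses per stimulus (e.g., an SMG ROI).
rng = np.random.default_rng(0)
n_stimuli, n_synergies, n_voxels = 90, 10, 500
X = rng.normal(size=(n_stimuli, n_synergies))   # synergy weights per action (placeholder data)
Y = rng.normal(size=(n_stimuli, n_voxels))      # voxel responses per action (placeholder data)

def encoding_model_accuracy(X, Y, n_splits=5):
    """Cross-validated voxel-wise linear encoding: predict each voxel's
    response from kinematic-synergy features, then score each voxel by the
    correlation between predicted and observed responses on held-out stimuli."""
    preds = np.zeros_like(Y)
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = RidgeCV(alphas=np.logspace(-2, 4, 13))  # regularized linear map, alpha chosen by CV
        model.fit(X[train], Y[train])
        preds[test] = model.predict(X[test])
    # Pearson r per voxel between observed and predicted held-out responses
    Yz = (Y - Y.mean(0)) / Y.std(0)
    Pz = (preds - preds.mean(0)) / preds.std(0)
    return (Yz * Pz).mean(0)

r_per_voxel = encoding_model_accuracy(X, Y)
print(f"median held-out prediction r = {np.median(r_per_voxel):.3f}")
```

In the spirit of the control analyses described above, the same pipeline could be run with image-computable features (e.g., activations from AlexNet, ResNet50, or VGG16) substituted for X; the abstract's claim is that synergy-based features, but not image-based features, predict held-out responses in the SMG, whereas the reverse pattern holds in visual areas.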