Department of Psychology Scarborough, University of Toronto, Toronto, Ontario M1C1A4, Canada.
J Neurosci. 2024 Aug 14;44(33):e2208232024. doi: 10.1523/JNEUROSCI.2208-23.2024.
The simple act of viewing and grasping an object involves complex sensorimotor control mechanisms that have been shown to vary as a function of multiple object and task features, such as object size, shape, weight, and wrist orientation. However, these features have mostly been studied in isolation. In contrast, given the nonlinearity of motor control, its computations require multiple features to be incorporated concurrently. The present study therefore tested the hypothesis that grasp computations integrate multiple task features superadditively, particularly when these features are relevant for the same action phase. We asked male and female human participants to reach to and grasp objects of different shapes and sizes with different wrist orientations. We also delayed movement onset, using auditory signals to specify which effector to use. Using electroencephalography and representational dissimilarity analysis to map the time course of cortical activity, we found that grasp computations formed superadditive integrated representations of grasp features during different planning phases of grasping. Shape-by-size representations and size-by-orientation representations occurred before and after effector specification, respectively, and could not be explained by single-feature models. These observations are consistent with the brain performing distinct preparatory, phase-specific computations: visual object analysis to identify grasp points at abstract visual levels, and downstream sensorimotor preparatory computations for reach-to-grasp trajectories. Our results suggest the brain adheres to the needs of nonlinear motor control for integration. Furthermore, they show that examining the superadditive influence of integrated representations can serve as a novel lens to map the computations underlying sensorimotor control.
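The superadditivity test described in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' analysis code): single-feature model representational dissimilarity matrices (RDMs) are regressed against a neural RDM, and one asks whether adding a conjunction (feature-by-feature) model RDM explains additional variance. The condition counts, binary model RDMs, simulated "neural" RDM, and least-squares model comparison below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 hypothetical conditions: 2 shapes x 2 sizes x 2 orientations
shapes = np.repeat([0, 1], 4)
sizes = np.tile(np.repeat([0, 1], 2), 2)

def feature_rdm(labels):
    """Binary model RDM: 1 where two conditions differ on the feature."""
    return (labels[:, None] != labels[None, :]).astype(float)

rdm_shape = feature_rdm(shapes)
rdm_size = feature_rdm(sizes)
# Conjunction model: dissimilar only when BOTH features differ
rdm_conj = rdm_shape * rdm_size

# Simulated "neural" RDM with an interaction beyond the additive parts
rdm_neural = (rdm_shape + rdm_size + 2.0 * rdm_conj
              + 0.1 * rng.standard_normal(rdm_shape.shape))
rdm_neural = (rdm_neural + rdm_neural.T) / 2  # keep it symmetric

iu = np.triu_indices(len(shapes), k=1)  # use upper-triangle entries only
y = rdm_neural[iu]

def r2(models, y):
    """Variance explained by a least-squares fit of model RDMs (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + models)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_additive = r2([rdm_shape[iu], rdm_size[iu]], y)
r2_full = r2([rdm_shape[iu], rdm_size[iu], rdm_conj[iu]], y)
print(f"additive model R^2:   {r2_additive:.3f}")
print(f"with conjunction R^2: {r2_full:.3f}")  # larger gain -> superadditive signal
```

In this sketch, a representation is flagged as superadditively integrated when the conjunction regressor improves the fit beyond what the single-feature RDMs achieve together, mirroring the abstract's claim that the observed shape-by-size and size-by-orientation representations "could not be explained by single-feature models."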