Gopal Atul, Murthy Aditya
National Brain Research Centre, Manesar, Haryana, India; and
Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India
J Neurophysiol. 2015 Sep;114(3):1438-54. doi: 10.1152/jn.00276.2015. Epub 2015 Jun 17.
Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether they represent an interaction between otherwise independent eye and hand systems (interactive model); or whether they arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye and hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has previously been used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture best explained the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts that, despite a significant difference in the means of the eye and hand reaction time (RT) distributions, their variances should be comparable; we tested this prediction. Consistent with it, we observed that the variances of the eye and hand RTs were similar, despite hand RTs being ∼90 ms longer. Moreover, changes in mean eye RT, which also increased eye RT variance, produced a similar increase in the mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
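The variance prediction of the common command model can be illustrated with a minimal simulation: if a single stochastic accumulator triggers both effectors, with the hand following the eye after a roughly fixed efferent delay, then the two RT distributions differ in mean but not in variance. The sketch below is a generic drift-diffusion simulation, not the authors' fitted model; the drift, noise, and threshold values are illustrative assumptions, and only the ∼90 ms eye-hand offset comes from the abstract.

```python
import random
import statistics

def ddm_rt(drift=0.2, noise=1.0, threshold=30.0, dt=1.0, rng=random):
    """One drift-diffusion trial: accumulate noisy evidence until the
    threshold is crossed; return the crossing time in ms (illustrative units)."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

random.seed(1)
HAND_DELAY = 90.0  # ms; the ~90 ms eye-hand RT offset reported in the abstract

eye_rts, hand_rts = [], []
for _ in range(2000):
    t = ddm_rt()                      # one shared accumulator per trial
    eye_rts.append(t)                 # eye launched at threshold crossing
    hand_rts.append(t + HAND_DELAY)   # hand follows after a fixed delay

print("mean eye/hand RT:", statistics.mean(eye_rts), statistics.mean(hand_rts))
print("SD eye/hand RT:  ", statistics.pstdev(eye_rts), statistics.pstdev(hand_rts))
```

Because the hand delay is modeled as a constant, the hand RT distribution is simply the eye RT distribution shifted by 90 ms: the means differ by the delay while the standard deviations are identical, which is the signature the paper attributes to a common stochastic accumulator. Independent accumulators for eye and hand would instead add separate noise to each effector, and the hand variance would generally exceed the eye variance.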