NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1 Wakamiya Morinosato, Atsugi, Kanagawa 243-0198, Japan.
Curr Biol. 2022 Jun 20;32(12):2747-2753.e6. doi: 10.1016/j.cub.2022.04.065. Epub 2022 May 16.
Numerous studies have proposed that our adaptive motor behaviors depend on learning a map between sensory information and limb movement, called an "internal model." From this perspective, how the brain represents internal models is a critical issue in motor learning, especially regarding their association with the spatial frames processed in motor planning. Extensive experimental evidence suggests that during the planning stages of visually guided hand reaching, the brain transforms visual target representations in gaze-centered coordinates into motor commands in limb coordinates, via hand-target vectors in workspace coordinates. While numerous studies have intensively investigated whether learning for reaching occurs in workspace or limb coordinates, the association of this learning with gaze coordinates remains untested. Given the critical role of gaze-related spatial coding in reach planning, the potential role of gaze states in learning is worth examining. Here, we show that motor memories for reaching are learned separately according to target location in gaze coordinates. Specifically, two opposing visuomotor rotations, which normally interfere with each other, can be learned simultaneously when each is associated with reaching to a foveal target or to a peripheral one. We also show that this gaze-dependent learning occurs in force-field adaptation. Furthermore, generalization of gaze-coupled reach adaptation is limited across the central, right, and left visual fields. These results suggest that gaze states are used in the formation and recall of multiple internal models for reaching. Our findings provide novel evidence that a gaze-dependent spatial representation can serve as a spatial coordinate framework for context-dependent motor learning.