J Cogn Neurosci. 1993 Fall;5(4):408-35. doi: 10.1162/jocn.1993.5.4.408.
Abstract

This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution to the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
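The core computation the abstract describes — transforming a spatial direction vector (target minus end-effector position) into a motor direction vector (joint rotations) for an arm with excess degrees of freedom — can be illustrated with a minimal kinematic sketch. This is not the DIRECT model's learned, self-organizing transform: as an assumption, a Jacobian pseudoinverse stands in for the adaptively learned mapping, and the planar three-joint arm, function names, and parameters are all hypothetical illustrations. Zeroing a Jacobian column mimics reaching with a clamped joint, one of the motor-equivalence conditions the abstract lists.

```python
import numpy as np

def fk(thetas, lengths):
    """Forward kinematics of a planar arm: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(thetas, lengths):
    """2 x n Jacobian of end-effector position w.r.t. joint angles."""
    angles = np.cumsum(thetas)
    n = len(thetas)
    J = np.zeros((2, n))
    for j in range(n):
        # Joint j moves every link from j onward.
        J[0, j] = -np.sum(lengths[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(lengths[j:] * np.cos(angles[j:]))
    return J

def reach(target, thetas, lengths, clamped=(), step=0.1, iters=200):
    """Drive the end effector toward the target by repeatedly mapping the
    spatial direction vector into a motor direction vector."""
    thetas = np.array(thetas, dtype=float)
    for _ in range(iters):
        err = target - fk(thetas, lengths)      # spatial direction vector
        if np.linalg.norm(err) < 1e-3:
            break
        J = jacobian(thetas, lengths)
        J[:, list(clamped)] = 0.0               # a clamped joint cannot rotate
        dtheta = np.linalg.pinv(J) @ err        # motor direction vector
        thetas += step * dtheta
    return thetas

lengths = np.array([1.0, 1.0, 1.0])
target = np.array([1.5, 1.0])
free = reach(target, [0.1, 0.2, 0.3], lengths)
clamp = reach(target, [0.1, 0.2, 0.3], lengths, clamped=(1,))
```

Both calls converge on the same spatial target through different joint configurations — the redundant third degree of freedom lets the arm compensate for the clamped joint within a single reach, with no corrective movement, which is the motor-equivalence behavior the abstract attributes to the DIRECT model.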