
Computations for geometrically accurate visually guided reaching in 3-D space.

Author Information

Blohm Gunnar, Crawford J Douglas

Affiliations

Centre for Vision Research, York University, Toronto, Canada.

Publication Information

J Vis. 2007 May 4;7(5):4.1-22. doi: 10.1167/7.5.4.

Abstract

A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute the visual (gaze-centered) desired movement vector. It then transformed these eye-centered plans into shoulder-centered motor plans using extraretinal eye and head position signals accounting for the complete 3-D eye-in-head and head-on-shoulder geometry (i.e., translation and rotation). We compared actual memory-guided reaching performance to the predictions of the model. By removing extraretinal signals (i.e., eye-head rotations and the offset between the centers of rotation of the eye and head) from the model, we developed a compensation index describing how accurately the brain performs the 3-D visuomotor transformation for different head-restrained and head-unrestrained gaze positions as well as for eye and head roll. Overall, subjects did not show errors predicted when extraretinal signals were ignored. Their reaching performance was accurate and the compensation index revealed that subjects accounted for the 3-D visuomotor transformation geometry. This was also the case for the initial portion of the movement (before proprioceptive feedback) indicating that the desired reach plan is computed in a feed-forward fashion. These findings show that the visuomotor transformation for reaching implements an internal model of the complete eye-to-shoulder linkage geometry and does not only rely on feedback control mechanisms. We discuss the relevance of this model in predicting reaching behavior in several patient groups.
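The core geometric idea in the abstract can be sketched concretely: an eye-centered position is carried into shoulder-centered coordinates by applying the eye-in-head rotation and translation, then the head-on-shoulder rotation and translation. Because the desired movement vector is the difference of two transformed points, the translations cancel but the rotations do not, which is why extraretinal eye and head orientation signals matter. The sketch below is a minimal illustration of that chain, not the authors' actual model; all numeric values (rotation angles, offsets) are hypothetical.

```python
import numpy as np

def rot(axis, angle_deg):
    """Rotation matrix for a rotation of angle_deg about the 'x', 'y', or 'z' axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def eye_to_shoulder(p_eye, R_eye_in_head, R_head_on_shoulder,
                    eye_offset_in_head, head_offset_on_shoulder):
    """Express an eye-centered point in shoulder-centered coordinates,
    accounting for both the rotation and the translation of each link."""
    p_head = R_eye_in_head @ p_eye + eye_offset_in_head
    return R_head_on_shoulder @ p_head + head_offset_on_shoulder

# Hypothetical eye/head postures and link offsets (meters), for illustration only.
R_eye  = rot('z', 15.0)                 # 15 deg horizontal eye-in-head rotation
R_head = rot('x', 10.0)                 # 10 deg head roll on the shoulder
o_eye  = np.array([0.0, 0.03, 0.10])    # eye center relative to head rotation center
o_head = np.array([-0.20, 0.0, 0.30])   # head center relative to shoulder

target_eye = np.array([0.10, 0.40, 0.0])    # desired target, eye coordinates
hand_eye   = np.array([0.00, 0.35, -0.10])  # current hand, eye coordinates

# Desired movement vector in shoulder coordinates: transform both endpoints,
# then subtract. The offsets cancel; the rotations rotate the movement vector.
move = (eye_to_shoulder(target_eye, R_eye, R_head, o_eye, o_head)
        - eye_to_shoulder(hand_eye, R_eye, R_head, o_eye, o_head))
```

Note that `move` equals `R_head @ R_eye @ (target_eye - hand_eye)`: a model that ignored the extraretinal rotation signals (using identity rotations instead) would predict systematically misrotated reach vectors, which is exactly the error pattern the compensation index tests for.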

