Krigolson Olav, Heath Matthew
School of Physical Education, University of Victoria, P.O. Box 3015, STN CSC, Victoria, BC, Canada V8W 3P1.
Hum Mov Sci. 2004 Dec;23(6):861-77. doi: 10.1016/j.humov.2004.10.011.
Recent research [e.g., Carrozzo, M., Stratta, F., McIntyre, J., & Lacquaniti, F. (2002). Cognitive allocentric representations of visual space shape pointing errors. Experimental Brain Research, 147, 426-436; Lemay, M., Bertrand, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8, 16-32] reported that egocentric and allocentric visual frames of reference can be integrated to facilitate the accuracy of goal-directed reaching movements. In the present investigation, we sought to examine whether a visual background can facilitate the online, feedback-based control of visually-guided (VG), open-loop (OL), and memory-guided (i.e., 0 and 1000 ms delays: D0 and D1000) reaches. Two background conditions were examined. In the first background condition, four illuminated LEDs positioned in a square surrounding the target location provided a context for allocentric comparisons (visual background: VB). In the second condition, the target object was presented alone against an empty visual field (no visual background: NVB). Participants (N=14) completed reaching movements to three midline targets in each background (VB, NVB) and visual condition (VG, OL, D0, D1000) for a total of 240 trials. VB reaches were more accurate and less variable than NVB reaches in each visual condition. Moreover, VB reaches elicited longer movement times and spent a greater proportion of the reaching trajectory in the deceleration phase of the movement. Supporting the benefit of a VB for online control, the proportion of endpoint variability explained by the spatial location of the limb at peak deceleration was smaller for VB than for NVB reaches. These findings suggest that participants are able to make allocentric comparisons between a VB and target (visible or remembered), in addition to egocentric limb and VB comparisons, to facilitate online reaching control.
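The "proportion of endpoint variability explained by the spatial location of the limb at peak deceleration" is conventionally obtained by regressing trial-by-trial movement endpoints on limb position at that kinematic marker and taking the R² of the fit; a smaller R² indicates that the late portion of the trajectory was amended online rather than determined by the state of the limb at peak deceleration. The sketch below illustrates that computation on simulated data; it is not the authors' analysis code, and the function name, condition labels, and all numeric values are illustrative assumptions.

```python
# Minimal sketch of the endpoint-variability (R^2) analysis described in the abstract:
# regress endpoints on limb position at peak deceleration; R^2 is the proportion of
# endpoint variance explained. Simulated data and effect sizes are assumptions only.
import numpy as np


def r_squared(pos_at_peak_decel: np.ndarray, endpoint: np.ndarray) -> float:
    """Proportion of endpoint variance explained by limb position at peak deceleration."""
    slope, intercept = np.polyfit(pos_at_peak_decel, endpoint, 1)
    predicted = slope * pos_at_peak_decel + intercept
    ss_res = np.sum((endpoint - predicted) ** 2)
    ss_tot = np.sum((endpoint - endpoint.mean()) ** 2)
    return 1.0 - ss_res / ss_tot


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_trials = 60
    decel_pos = rng.normal(250.0, 8.0, n_trials)  # limb position (mm) along movement axis

    # Hypothetical NVB condition: endpoints closely track the early trajectory
    # (little late correction), yielding a high R^2.
    endpoint_nvb = decel_pos + rng.normal(0.0, 2.0, n_trials)

    # Hypothetical VB condition: online corrections partly decouple endpoints from
    # limb position at peak deceleration, yielding a lower R^2.
    endpoint_vb = 0.4 * decel_pos + 150.0 + rng.normal(0.0, 4.0, n_trials)

    print(f"R^2 (NVB): {r_squared(decel_pos, endpoint_nvb):.2f}")
    print(f"R^2 (VB):  {r_squared(decel_pos, endpoint_vb):.2f}")
```

Under this reading, the lower R² reported for VB reaches is consistent with participants using the background for feedback-based corrections after peak deceleration.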