Fajen, Brett R.
Department of Cognitive Science, Rensselaer Polytechnic Institute, Carnegie Building 308, 110 Eighth Street, Troy, NY 12180-3590, USA.
Perception. 2005;34(6):717-40. doi: 10.1068/p5405.
Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, that are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws describing, for each task, how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, it cannot capture several aspects of these tasks that are essential for successful performance. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-standing incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.