Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, New York, United States of America.
Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America.
PLoS One. 2024 Mar 8;19(3):e0289855. doi: 10.1371/journal.pone.0289855. eCollection 2024.
When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed, forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted their approach to the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume steering is tightly coupled to where actors look. We consider the study's broader implications as well as its limitations, including the focus on a small sample of highly skilled subjects and the inherent noise in the measurement of gaze direction.