Decoding target discriminability and time pressure using eye and head movement features in a foraging search task.
Authors
Ries Anthony J, Callahan-Flintoft Chloe, Madison Anna, Dankovich Louis, Touryan Jonathan
Affiliations
Humans in Complex Systems, U.S. Army DEVCOM Army Research Laboratory, 7101 Mulberry Point Rd, Aberdeen Proving Ground, MD, 21005, USA.
Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, 80840, USA.
Publication information
Cogn Res Princ Implic. 2025 Aug 22;10(1):53. doi: 10.1186/s41235-025-00657-y.
In military operations, rapid and accurate decision-making is crucial, especially in visually complex and high-pressure environments. This study investigates how eye and head movement metrics can infer changes in search behavior during a naturalistic shooting scenario in virtual reality (VR). Thirty-one participants performed a foraging search task using a head-mounted display (HMD) with integrated eye tracking. Participants searched for targets among distractors under varying levels of target discriminability (easy vs. hard) and time pressure (low vs. high). As expected, behavioral results indicated that increased discrimination difficulty and greater time pressure negatively impacted performance, leading to slower response times and reduced d-prime. Support vector classifiers assigned a search condition (discriminability or time pressure) to each trial based on eye and head movement features. Combined eye and head features produced the most accurate classification model for capturing task-induced changes in search behavior, with the combined model outperforming those based on eye or head features alone. While eye features demonstrated strong predictive power, the inclusion of head features significantly enhanced model performance. Across the ensemble of eye metrics, fixation-related features were the most robust for classifying target discriminability, while saccade-related features played a similar role for time pressure. In contrast, models constrained to head metrics emphasized global movement (amplitude, velocity) for classifying discriminability but shifted toward kinematic intensity (acceleration, jerk) in the time pressure condition. Together, these results speak to the complementary role of eye and head movements in understanding search behavior under changing task parameters.
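The classification pipeline described above can be illustrated with a minimal sketch. Note the assumptions: the paper used support vector classifiers on a rich set of eye and head movement features; this stand-in uses a simple nearest-centroid classifier on synthetic data with hypothetical feature names (`fixation_dur`, `saccade_amp`, `head_amp`, `head_jerk`), purely to show how per-trial eye, head, and combined feature sets are compared.

```python
import random

random.seed(0)

# Hypothetical per-trial features; the study's actual feature set is richer.
EYE = ["fixation_dur", "saccade_amp"]
HEAD = ["head_amp", "head_jerk"]

def make_trial(condition):
    # Synthetic data: "hard" trials shift all feature means by one unit.
    shift = 1.0 if condition == "hard" else 0.0
    return {f: random.gauss(shift, 1.0) for f in EYE + HEAD}, condition

trials = [make_trial(random.choice(["easy", "hard"])) for _ in range(400)]
train, test = trials[:300], trials[300:]

def centroid_classify(features):
    # Nearest-centroid stand-in for the paper's support vector classifier:
    # label each test trial by the closer class mean in feature space.
    cents = {}
    for label in ("easy", "hard"):
        rows = [t for t, c in train if c == label]
        cents[label] = [sum(r[f] for r in rows) / len(rows) for f in features]
    correct = 0
    for t, c in test:
        vec = [t[f] for f in features]
        pred = min(cents, key=lambda lab: sum(
            (v - m) ** 2 for v, m in zip(vec, cents[lab])))
        correct += pred == c
    return correct / len(test)

for name, feats in [("eye", EYE), ("head", HEAD), ("combined", EYE + HEAD)]:
    print(f"{name}: {centroid_classify(feats):.2f}")
```

Because the synthetic "hard" shift acts on every feature, the combined set spans more discriminative dimensions and typically scores highest, mirroring (in toy form) the paper's finding that combined eye-plus-head models outperform either modality alone.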