Stefan Kohlbecher, Erich Schneider
Chair for Clinical Neurosciences, University of Munich Hospital, Munich, Germany.
Ann N Y Acad Sci. 2009 May;1164:400-2. doi: 10.1111/j.1749-6632.2009.03776.x.
An extensible multiple-model Kalman filter framework for eye-tracking and video-oculography (VOG) applications is proposed. The Kalman filter predicts future states of a system from a mathematical model and previous measurements. The predicted values are then compared against the current measurements, and in a correction step the predicted state is refined using those measurements. In this work, the Kalman filter is used for smoothing VOG data, for online classification of eye movements, and for predictive real-time control of a gaze-driven head-mounted camera (EyeSeeCam). With multiple models running in parallel, it was possible to distinguish between fixations, slow-phase eye movements, and saccades. Under the assumption that each class of eye movement follows a distinct model, the type of eye movement that occurred can be determined by evaluating the probability of each model.
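The following Python sketch illustrates the multiple-model idea described above; it is not the authors' implementation. Several Kalman filters, each embodying a different assumption about eye-movement dynamics, run in parallel on the same one-dimensional gaze signal, and each sample is assigned to the class whose filter explains the measurement with the highest likelihood. The model structure and noise parameters are illustrative assumptions, not values from the paper.

```python
# Minimal multiple-model Kalman filter sketch for eye-movement classification.
# Assumptions: 1-D gaze position measurements, constant-velocity state models,
# and hand-picked noise levels per class (fixation / slow phase / saccade).
import numpy as np


class KalmanFilter1D:
    """Linear Kalman filter with state [position, velocity]."""

    def __init__(self, q, r, label):
        self.label = label
        self.x = np.zeros(2)             # state estimate [pos, vel]
        self.P = np.eye(2)               # state covariance
        self.Q = q * np.eye(2)           # process-noise covariance
        self.R = np.array([[r]])         # measurement-noise covariance
        self.H = np.array([[1.0, 0.0]])  # only position is measured

    def step(self, z, dt):
        """One predict/correct cycle; returns the measurement likelihood."""
        F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        # Predict the next state and its uncertainty.
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        # Innovation: difference between measurement and prediction.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        # Gaussian likelihood of the measurement under this model.
        lik = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(2 * np.pi * S[0, 0])
        # Correct the prediction with the measurement.
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(lik)


def classify(samples, dt=0.01):
    """Label each gaze sample with the model that explains it best."""
    # Hypothetical noise settings: fixations tolerate almost no motion,
    # slow phases allow moderate velocity changes, saccades allow large jumps.
    models = [
        KalmanFilter1D(q=1e-4, r=0.5, label="fixation"),
        KalmanFilter1D(q=1e-2, r=0.5, label="slow phase"),
        KalmanFilter1D(q=10.0, r=0.5, label="saccade"),
    ]
    labels = []
    for z in samples:
        liks = np.array([m.step(np.array([z]), dt) for m in models])
        probs = liks / liks.sum()        # normalized model probabilities
        labels.append(models[int(np.argmax(probs))].label)
    return labels


if __name__ == "__main__":
    # Synthetic trace: steady fixation, then a rapid saccade-like jump.
    gaze = np.concatenate([np.full(50, 0.0),
                           np.linspace(0.0, 10.0, 10),
                           np.full(50, 10.0)])
    gaze += np.random.normal(0, 0.05, gaze.size)
    print(classify(gaze)[45:65])
```

In this sketch the per-sample model probabilities are obtained simply by normalizing the innovation likelihoods; a full multiple-model estimator would typically also propagate model probabilities over time, but the simpler form is enough to show how parallel filters can separate fixations, slow phases, and saccades.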