Schneider Erich, Villgrattner Thomas, Vockeroth Johannes, Bartl Klaus, Kohlbecher Stefan, Bardins Stanislavs, Ulbrich Heinz, Brandt Thomas
Clinical Neurosciences, University of Munich Hospital, Munich, Germany.
Ann N Y Acad Sci. 2009 May;1164:461-7. doi: 10.1111/j.1749-6632.2009.03858.x.
The prototype of a gaze-controlled, head-mounted camera (EyeSeeCam) was developed that provides the functionality for fundamental studies on human gaze behavior even under dynamic conditions like locomotion. EyeSeeCam incorporates active visual exploration by saccades with image stabilization during head, object, and surround motion just as occurs in human ocular motor control. This prototype is a first attempt to combine free user mobility with image stabilization and unrestricted exploration of the visual surround in a man-made technical vision system. The gaze-driven camera is supplemented by an additional wide-angle, head-fixed scene camera. In this scene view, the focused gaze view is embedded with picture-in-picture functionality, which provides an approximation of the foveated retinal content. Such a combined video clip can be viewed more comfortably than the saccade-pervaded image of the gaze camera alone. EyeSeeCam consists of a video-oculography (VOG) device and a camera motion device. The benchmark for the evaluation of such a device is the vestibulo-ocular reflex (VOR), which requires a latency on the order of 10 msec between head and eye (camera) movements for proper image stabilization. A new lightweight VOG was developed that is able to synchronously measure binocular eye positions at up to 600 Hz. The camera motion device consists of a parallel kinematics setup with a backlash-free gimbal joint that is driven by piezo actuators with no reduction gears. As a result, the latency between the rotations of an artificial eye and the camera was 10 msec, which is VOR-like.
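The picture-in-picture embedding of the gaze view into the head-fixed scene view can be sketched as follows. This is an illustrative compositing sketch only: the function name, inset scale, and frame shapes are assumptions, not taken from the EyeSeeCam software.

```python
import numpy as np

def embed_gaze_view(scene, gaze, scale=0.25, margin=10):
    """Embed a downscaled gaze-camera frame into the wide-angle scene
    frame as a picture-in-picture inset (hypothetical sketch; the
    scale and margin values are arbitrary choices for illustration)."""
    h, w = gaze.shape[:2]
    ih, iw = int(h * scale), int(w * scale)
    # naive nearest-neighbour downscaling of the gaze frame
    ys = np.arange(ih) * h // ih
    xs = np.arange(iw) * w // iw
    inset = gaze[ys][:, xs]
    # copy the scene so the original frame is left untouched,
    # then paste the inset into the top-left corner
    out = scene.copy()
    out[margin:margin + ih, margin:margin + iw] = inset
    return out
```

In a real pipeline the inset would be refreshed per frame, so the combined clip shows the stabilized wide-angle context with the foveated gaze content embedded, as described above.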
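The VOR-like latency benchmark (on the order of 10 msec between artificial-eye and camera rotation) could be estimated from two recorded rotation traces by cross-correlation. This is a hedged analysis sketch, not the authors' measurement procedure; only the 600 Hz sampling rate is taken from the abstract.

```python
import numpy as np

def estimate_latency(eye, cam, fs=600.0):
    """Estimate the delay (in seconds) of the camera trace relative to
    the eye trace by locating the peak of their cross-correlation.
    fs is the sampling rate in Hz (the VOG samples at up to 600 Hz).
    Illustrative sketch; signal names and shapes are assumptions."""
    # remove DC offsets so the correlation peak reflects motion only
    eye = eye - eye.mean()
    cam = cam - cam.mean()
    # full cross-correlation; index (len(eye) - 1) corresponds to zero lag
    xcorr = np.correlate(cam, eye, mode="full")
    lag = np.argmax(xcorr) - (len(eye) - 1)
    return lag / fs
```

With a broadband rotation stimulus, a 6-sample shift at 600 Hz corresponds to the 10 msec latency reported for the prototype.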