
Real-time tracking of visually attended objects in virtual environments and its application to LOD.

Author information

Lee Sungkil, Kim Gerard Jounghyun, Choi Seungmoon

Affiliation

Department of Computer Science and Engineering, POSTECH, Pohang, Korea.

Publication information

IEEE Trans Vis Comput Graph. 2009 Jan-Feb;15(1):6-19. doi: 10.1109/TVCG.2008.82.

Abstract

This paper presents a real-time framework for computationally tracking the objects visually attended by the user while navigating interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects the framework identified as visually attended with actual human gaze data collected using an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, owing especially to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
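The core idea of fusing a bottom-up saliency score with a top-down contextual weight to select the most plausibly attended object can be sketched as follows. This is a minimal illustration, not the paper's actual method: the `Obj` class, the attribute names, and the simple multiplicative combination are all assumptions; the real framework derives top-down weights from the user's spatial and temporal behaviors and operates on a GPU-computed object saliency map.

```python
# Hedged sketch: combining bottom-up (stimulus-driven) saliency with a
# top-down (goal-directed) context weight to pick the attended object.
# All names and the product-based fusion are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Obj:
    name: str
    saliency: float        # bottom-up saliency from the saliency map, in [0, 1]
    context_weight: float  # top-down weight inferred from user behavior, in [0, 1]


def attended_object(objects):
    """Return the object with the highest combined attention score.

    A simple product is used here; the actual framework fuses the two
    cues in a more elaborate, behavior-driven way.
    """
    return max(objects, key=lambda o: o.saliency * o.context_weight)


objs = [
    Obj("lamp", saliency=0.9, context_weight=0.1),  # visually salient, goal-irrelevant
    Obj("door", saliency=0.5, context_weight=0.8),  # less salient, goal-relevant
]
print(attended_object(objs).name)  # top-down context overrides raw saliency
```

The example shows why a purely bottom-up saliency map can mispredict attention: the goal-relevant but less salient object wins once the top-down weight is factored in, which is exactly the effect the paper's user experiment attributes the accuracy gains to.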

