Usability of multiviewpoint images for spatial interaction in projection-based display systems.

Author Information

Simon Andreas

Affiliation

Academy of Art and Design, University of Applied Sciences Northwestern Switzerland, FHNW Art and Design, Aarau, Switzerland.

Publication Information

IEEE Trans Vis Comput Graph. 2007 Jan-Feb;13(1):26-33. doi: 10.1109/TVCG.2007.23.

Abstract

In a common application scenario, large screen projection-based stereoscopic display systems are not used by a single user alone, but are shared by a small group of people. Using multiviewpoint images for multiuser interaction does not require special hardware and scales transparently with the number of colocated users in a system. We present a qualitative and quantitative study comparing usability and interaction performance for multiviewpoint images to non-head-tracked and head-tracked interaction for ray-casting selection and in-hand object manipulation. Results show that while direct first-person interaction in projection-based displays without head-tracking is difficult or even completely impractical, interaction with multiviewpoint images can produce similar or even better performance than fully head-tracked interaction. For ray-casting selection, interaction with multiviewpoint images is actually up to 10 percent faster than head-tracked interaction. For in-hand object manipulation in a simple docking task, multiviewpoint interaction performs only about 6 percent slower than fully head-tracked interaction.
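
The paper compares interaction conditions rather than describing an implementation, but for readers unfamiliar with the term, ray-casting selection means pointing a virtual ray from the tracked hand and selecting the first object the ray intersects. The sketch below is a minimal, hypothetical illustration of that selection test using spherical proxies for scene objects; the function and variable names are illustrative and do not come from the paper.

import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest intersection
    with a sphere, or None if the ray misses it."""
    oc = origin - center
    b = np.dot(oc, direction)                # direction assumed unit length
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                          # ray misses the sphere
    t = -b - np.sqrt(disc)
    return t if t >= 0.0 else None           # ignore hits behind the hand

def ray_cast_select(hand_pos, hand_dir, objects):
    """Pick the closest object whose bounding sphere the hand ray hits.
    `objects` is a list of (name, center, radius) tuples -- a simplified
    stand-in for the targets in a selection task."""
    hand_dir = hand_dir / np.linalg.norm(hand_dir)
    best = None
    for name, center, radius in objects:
        t = ray_sphere_hit(hand_pos, hand_dir, np.asarray(center, float), radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None

# Example: a hand at the origin pointing down -z toward two targets.
scene = [("near_target", (0.0, 0.0, -2.0), 0.3),
         ("far_target",  (0.1, 0.0, -5.0), 0.3)]
print(ray_cast_select(np.zeros(3), np.array([0.0, 0.0, -1.0]), scene))  # near_target

In the study's head-tracked and multiviewpoint conditions, the hand position and direction in such a test would come from the tracking system; the difference between the conditions lies in how the stereoscopic image is rendered for the viewer, not in the selection logic itself.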

