
Fauxvea: Crowdsourcing Gaze Location Estimates for Visualization Analysis Tasks.

Publication Info

IEEE Trans Vis Comput Graph. 2017 Feb;23(2):1042-1055. doi: 10.1109/TVCG.2016.2532331. Epub 2016 Feb 19.


DOI: 10.1109/TVCG.2016.2532331
PMID: 26915125
Abstract

We present the design and evaluation of a method for estimating gaze locations during the analysis of static visualizations using crowdsourcing. Understanding gaze patterns is helpful for evaluating visualizations and user behaviors, but traditional eye-tracking studies require specialized hardware and local users. To avoid these constraints, we developed a method called Fauxvea, which crowdsources visualization tasks on the Web and estimates gaze fixations through cursor interactions without eye-tracking hardware. We ran experiments to evaluate how gaze estimates from our method compare with eye-tracking data. First, we evaluated crowdsourced estimates for three common types of information visualizations and basic visualization tasks using Amazon Mechanical Turk (MTurk). In another, we reproduced findings from a previous eye-tracking study on tree layouts using our method on MTurk. Results from these experiments show that fixation estimates using Fauxvea are qualitatively and quantitatively similar to eye tracking on the same stimulus-task pairs. These findings suggest that crowdsourcing visual analysis tasks with static information visualizations could be a viable alternative to traditional eye-tracking studies for visualization research and design.
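The abstract describes estimating gaze fixations from cursor interactions rather than eye-tracking hardware. As a rough illustration of that idea (not the authors' implementation; the radius and dwell thresholds below are hypothetical), one can group consecutive cursor samples that linger within a small radius and report each group's centroid as an estimated fixation:

```python
# Illustrative sketch only: derive "fixation" estimates from time-ordered
# cursor samples using a dispersion/dwell heuristic. Parameter values are
# assumptions, not values from the paper.
import math


def estimate_fixations(samples, radius=30.0, min_dwell_ms=200):
    """samples: list of (x, y, t_ms) cursor positions, ordered by time.

    Returns a list of (x, y) centroids for groups of samples that stayed
    within `radius` pixels of their running centroid for at least
    `min_dwell_ms` milliseconds.
    """
    fixations = []
    group = []

    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    for x, y, t in samples:
        if group:
            cx, cy = centroid(group)
            if math.hypot(x - cx, y - cy) > radius:
                # Cursor moved away: close the current group if it dwelt
                # long enough to count as a fixation estimate.
                if group[-1][2] - group[0][2] >= min_dwell_ms:
                    fixations.append((cx, cy))
                group = []
        group.append((x, y, t))

    if group and group[-1][2] - group[0][2] >= min_dwell_ms:
        fixations.append(centroid(group))
    return fixations
```

On a trace that dwells near one point, jumps, and dwells near another, this yields two estimated fixations, one per dwell region. The actual Fauxvea design additionally controls what the worker can see (e.g., via the cursor-driven task interface), which this sketch does not model.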


Similar Articles

[1]
Fauxvea: Crowdsourcing Gaze Location Estimates for Visualization Analysis Tasks.

IEEE Trans Vis Comput Graph. 2016-2-19

[2]
Does an Eye Tracker Tell the Truth about Visualizations?: Findings while Investigating Visualizations for Decision Making.

IEEE Trans Vis Comput Graph. 2012-12

[3]
eSeeTrack--visualizing sequential fixation patterns.

IEEE Trans Vis Comput Graph. 2010

[4]
Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology.

PLoS One. 2017-8-4

[5]
Temporal coupling of eye gaze and cursor on key buttons during text-entry tasks.

Percept Mot Skills. 2014-2

[6]
Task and context determine where you look.

J Vis. 2007-12-19

[7]
The use of crowdsourcing in addiction science research: Amazon Mechanical Turk.

Exp Clin Psychopharmacol. 2019-2

[8]
Visualization of Data Regarding Infections Using Eye Tracking Techniques.

J Nurs Scholarsh. 2016-5

[9]
User-centered design of a web-based crowdsourcing-integrated semantic text annotation tool for building a mental health knowledge base.

J Biomed Inform. 2020-10

[10]
Effects of Depth Information on Visual Target Identification Task Performance in Shared Gaze Environments.

IEEE Trans Vis Comput Graph. 2020-5

Cited By

[1]
Measuring cognitive load of digital interface combining event-related potential and BubbleView.

Brain Inform. 2023-3-3

[2]
Auditory salience using natural scenes: An online study.

J Acoust Soc Am. 2021-10

[3]
MouseView.js: Reliable and valid attention tracking in web-based experiments using a cursor-directed aperture.

Behav Res Methods. 2022-8
