
Eliciting Model Steering Interactions From Users via Data and Visual Design Probes.

Publication Information

IEEE Trans Vis Comput Graph. 2024 Sep;30(9):6005-6019. doi: 10.1109/TVCG.2023.3322898. Epub 2024 Jul 31.

Abstract

Visual and interactive machine learning systems (IML) are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user's intent for steering machine learning models. We explore using data and visual design probes to elicit users' desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as pairs of target-interaction, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.

