
Diverse Interaction Recommendation for Public Users Exploring Multi-view Visualization using Deep Learning

Author Information

Li Yixuan, Qi Yusheng, Shi Yang, Chen Qing, Cao Nan, Chen Siming

Publication Information

IEEE Trans Vis Comput Graph. 2023 Jan;29(1):95-105. doi: 10.1109/TVCG.2022.3209461. Epub 2022 Dec 16.

Abstract

Interaction is an important channel to offer users insights in interactive visualization systems. However, which interaction to operate and which part of data to explore are hard questions for public users facing a multi-view visualization for the first time. Making these decisions largely relies on professional experience and analytic abilities, which is a huge challenge for non-professionals. To solve the problem, we propose a method aiming to provide diverse, insightful, and real-time interaction recommendations for novice users. Building on the Long-Short Term Memory Model (LSTM) structure, our model captures users' interactions and visual states and encodes them in numerical vectors to make further recommendations. Through an illustrative example of a visualization system about Chinese poets in the museum scenario, the model is proven to be workable in systems with multi-views and multiple interaction types. A further user study demonstrates the method's capability to help public users conduct more insightful and diverse interactive explorations and gain more accurate data insights.
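The abstract describes encoding a user's sequence of interactions and visual states into numerical vectors with an LSTM, then recommending a next interaction from that encoding. The paper's actual model, features, and training procedure are not reproduced here; the following is only a minimal NumPy sketch of that general pipeline, with assumed dimensions, random untrained weights, and a hypothetical linear readout over candidate interaction types.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate cell update."""
    z = W @ x + U @ h + b                      # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))  # sigmoid gates
    g = np.tanh(g)                             # candidate cell content
    c = f * c + i * g                          # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c

D, H = 8, 16                                   # input and hidden dims (assumed)
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

def encode_session(steps):
    """Fold a session into one vector; each step pairs an interaction
    (one-hot over 4 assumed types) with a 4-d visual-state vector."""
    h, c = np.zeros(H), np.zeros(H)
    for interaction, visual_state in steps:
        x = np.concatenate([interaction, visual_state])
        h, c = lstm_step(x, h, c, W, U, b)
    return h                                   # session embedding

# Hypothetical readout: score 5 candidate next interactions linearly.
n_actions = 5
V = rng.normal(0, 0.1, (n_actions, H))
session = [(np.eye(4)[rng.integers(4)], rng.normal(size=4)) for _ in range(6)]
scores = V @ encode_session(session)
recommendation = int(np.argmax(scores))        # index of recommended action
```

In a trained system the readout would be learned jointly with the LSTM, and a diversity-aware selection step (rather than a plain argmax) would pick among high-scoring candidates, in line with the paper's goal of diverse recommendations.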
