Chowdhury Imran, Moeid Abdul, Hoque Enamul, Kabir Muhammad Ashad, Hossain Md Sabir, Islam Mohammad Mainul
Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chittagong 4349, Bangladesh.
School of Information Technology, York University, Toronto, ON M3J 1P3, Canada.
IEEE Access. 2020 Dec 22;9:60-71. doi: 10.1109/ACCESS.2020.3046623. eCollection 2021.
Exploring and analyzing data using visualizations is at the heart of many decision-making tasks. Typically, people perform visual data analysis using mouse and touch interactions. While such interactions are often easy to use, they can be inadequate for expressing complex information and may require many steps to complete a task. Recently, natural language interaction has emerged as a promising technique for supporting exploration with visualization, as it lets the user express a complex analytical question more easily. In this paper, we investigate how to synergistically combine language and mouse-based direct manipulation so that the weakness of one modality can be complemented by the other. To this end, we have developed a novel system, named Multimodal Interactions System for Visual Analysis (MIVA), that allows users to provide input using both natural language (e.g., through speech) and direct manipulation (e.g., through mouse or touch) and presents the answer accordingly. To answer the current question in the context of past interactions, the system incorporates previous utterances and direct manipulations made by the user within a finite-state model. The uniqueness of our approach is that, unlike most previous approaches, which typically support multimodal interactions with a single visualization, MIVA enables multimodal interactions with multiple coordinated visualizations of a dashboard that visually summarizes a dataset. We tested MIVA's applicability on several dashboards, including a COVID-19 dashboard that visualizes coronavirus cases around the globe. We further empirically evaluated our system through a user study with twenty participants. The results of our study revealed that MIVA enhances the flow of visual analysis by enabling fluid, iterative exploration and refinement of data in a dashboard with multiple coordinated views.
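The abstract does not spell out the finite-state model, but a minimal Python sketch of how such a model might fold utterances and direct manipulations into a shared dialogue state could look as follows. The DialogueState class, the apply_event transition function, and the event schema are all illustrative assumptions, not MIVA's actual implementation.

    # Hypothetical sketch of a finite-state context model for multimodal
    # visual analysis; names and event schema are assumptions, not MIVA's API.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class DialogueState:
        """Finite dialogue state shared across the dashboard's coordinated views."""
        active_view: Optional[str] = None            # view last touched by mouse/touch
        measure: Optional[str] = None                # quantity to show, e.g. "deaths"
        filters: dict = field(default_factory=dict)  # e.g. {"country": "Italy"}
        last_filter_attr: Optional[str] = None       # attribute most recently filtered

    def apply_event(state: DialogueState, event: dict) -> DialogueState:
        """Transition function: fold one utterance or direct manipulation into
        the state so an underspecified follow-up can be resolved in context."""
        if event["kind"] == "click":                 # direct manipulation
            state.active_view = event["view"]
            state.filters[event["attribute"]] = event["value"]
            state.last_filter_attr = event["attribute"]
        elif event["kind"] == "utterance":           # natural language
            if "measure" in event:
                state.measure = event["measure"]
            if "attribute" in event and "value" in event:
                state.filters[event["attribute"]] = event["value"]
                state.last_filter_attr = event["attribute"]
            elif "value" in event:
                # "What about Spain?" carries only a value; reuse the
                # attribute from the most recent filter to interpret it.
                state.filters[state.last_filter_attr] = event["value"]
        return state

    # Example: click Italy on a map view, ask "show deaths", then follow up
    # with just "what about Spain?" -- the state supplies the missing context.
    s = DialogueState()
    s = apply_event(s, {"kind": "click", "view": "map",
                        "attribute": "country", "value": "Italy"})
    s = apply_event(s, {"kind": "utterance", "measure": "deaths"})
    s = apply_event(s, {"kind": "utterance", "value": "Spain"})
    print(s.measure, s.filters)                      # deaths {'country': 'Spain'}

Because the state is shared rather than tied to one chart, every coordinated view of the dashboard could react to the same transition, which is the multi-view behavior the abstract highlights.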