Department of Biomedical Informatics, University of Cincinnati, Cincinnati, Ohio, USA.
Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington, USA.
J Am Med Inform Assoc. 2019 Apr 1;26(4):314-323. doi: 10.1093/jamia/ocy190.
This article reports results from a systematic literature review related to the evaluation of data visualizations and visual analytics technologies within the health informatics domain. The review aims to (1) characterize the variety of evaluation methods used within the health informatics community and (2) identify best practices.
A systematic literature review was conducted following PRISMA guidelines. PubMed searches were conducted in February 2017 using search terms representing key concepts of interest: health care settings, visualization, and evaluation. References were also screened for eligibility. Data were extracted from included studies and analyzed using a PICOS framework: Participants, Interventions, Comparators, Outcomes, and Study Design.
After screening, 76 publications met the review criteria. Publications varied across all PICOS dimensions. The most common audience was healthcare providers (n = 43), and the most common data-gathering methods were direct observation (n = 30) and surveys (n = 27). About half of the publications (n = 36) evaluated static, consolidated visual displays of data. Evaluations were heterogeneous in both setting and the measurements used.
A variety of approaches have been used to evaluate data visualizations and visual analytics technologies. Usability measures were used most often in early (prototype) implementations, whereas clinical outcomes were most common in evaluations of operationally deployed systems. These findings suggest opportunities both to (1) expand evaluation practices and (2) innovate with respect to evaluation methods for data visualizations and visual analytics technologies across health settings.
Evaluation approaches are varied. New studies should adopt commonly reported metrics, context-appropriate study designs, and phased evaluation strategies.