Cho Hwayoung, Keenan Gail, Madandola Olatunde O, Dos Santos Fabiana Cristina, Macieira Tamara G R, Bjarnadottir Ragnhildur I, Priola Karen J B, Dunn Lopez Karen
College of Nursing, University of Florida, Gainesville, FL, United States.
College of Nursing, University of Iowa, Iowa City, IA, United States.
JMIR Hum Factors. 2022 May 10;9(2):e31758. doi: 10.2196/31758.
Poor usability is a primary cause of unintended consequences related to the use of electronic health record (EHR) systems and negatively impacts patient safety. Because of the cost and time required for iterative evaluations, many EHR components, such as clinical decision support systems (CDSSs), have not undergone rigorous usability testing before their deployment in clinical practice. Usability testing in the predeployment phase is crucial for eliminating usability issues and avoiding the costly fixes required when such issues are discovered only after the system's implementation.
This study presents an example application of a systematic evaluation method in which clinicians with human-computer interaction (HCI) expertise evaluate the usability of an electronic clinical decision support (CDS) intervention before its deployment in a randomized controlled trial.
We invited 6 HCI experts to participate in a heuristic evaluation of our CDS intervention. Each expert was asked to independently explore the intervention at least twice. After completing the assigned tasks using patient scenarios, each expert completed a heuristic evaluation checklist developed by Bright et al based on Nielsen's 10 heuristics. The experts also rated the overall severity of each identified heuristic violation on a scale of 0 to 4, where 0 indicates no problem and 4 indicates a usability catastrophe. The experts' coded comments were synthesized, and the severity of each identified heuristic violation was analyzed.
The 6 HCI experts included professionals from the fields of nursing (n=4), pharmaceutical science (n=1), and systems engineering (n=1). The mean overall severity scores of the identified heuristic violations ranged from 0.66 (flexibility and efficiency of use) to 2.00 (user control and freedom and error prevention), where scores closer to 0 indicate a more usable system. The heuristic principle user control and freedom was identified as the most in need of refinement and was considered, particularly by nonnursing HCI experts, to present major usability problems. Regarding the heuristic match between system and the real world, the experts pointed to the reversed direction of our system's pain scale scores (1=severe pain) compared with those commonly used in clinical practice (typically 1=mild pain); although this was classified as a minor usability problem, the nursing HCI experts repeatedly emphasized the need to correct it.
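The severity analysis described above can be sketched in a few lines: each evaluator assigns a 0-4 severity rating to every heuristic violation they identify, and the ratings are pooled per heuristic and averaged. The following is an illustrative sketch only, not the authors' analysis code, and the ratings shown are hypothetical placeholder data.

```python
# Illustrative aggregation of heuristic-violation severity ratings
# (0 = no problem ... 4 = usability catastrophe) into a mean severity
# score per heuristic, as in a Nielsen-style heuristic evaluation.
from collections import defaultdict
from statistics import mean

# Hypothetical (heuristic, severity) pairs collected from the evaluators.
ratings = [
    ("user control and freedom", 3),
    ("user control and freedom", 1),
    ("flexibility and efficiency of use", 0),
    ("flexibility and efficiency of use", 1),
    ("error prevention", 2),
    ("error prevention", 2),
]

# Pool the ratings by heuristic.
by_heuristic = defaultdict(list)
for heuristic, severity in ratings:
    by_heuristic[heuristic].append(severity)

# Scores closer to 0 indicate a more usable system.
mean_severity = {h: round(mean(s), 2) for h, s in by_heuristic.items()}
for heuristic, score in sorted(mean_severity.items(), key=lambda kv: kv[1]):
    print(f"{heuristic}: {score}")
```

Sorting by mean severity surfaces the heuristics most in need of refinement; in the study itself, user control and freedom and error prevention scored highest (2.00) and flexibility and efficiency of use lowest (0.66).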
Our heuristic evaluation process is simple and systematic and can be used at multiple stages of system development to reduce the time and cost needed to establish the usability of a system before its widespread implementation. Furthermore, heuristic evaluations can help organizations develop transparent reporting protocols for usability, as required by Title IV of the 21st Century Cures Act. Testing of EHRs and CDSSs by clinicians with HCI expertise in heuristic evaluation processes has the potential to reduce the frequency of testing while increasing its quality, which may reduce clinicians' cognitive workload and errors and enhance the adoption of EHRs and CDSSs.