Wohlgemut Jared M, Pisirir Erhan, Kyrimi Evangelia, Stoner Rebecca S, Marsh William, Perkins Zane B, Tai Nigel R M
Centre for Trauma Sciences, Blizard Institute, Queen Mary University of London, London, UK.
Trauma Service, Royal London Hospital, Barts Health NHS Trust, London, UK.
JAMIA Open. 2023 Jul 12;6(3):ooad051. doi: 10.1093/jamiaopen/ooad051. eCollection 2023 Oct.
The aim of this study was to determine the methods and metrics used to evaluate the usability of mobile application Clinical Decision Support Systems (CDSSs) used in healthcare emergencies. Secondary aims were to describe the characteristics and usability of evaluated CDSSs.
A systematic literature review was conducted using Pubmed/Medline, Embase, Scopus, and IEEE Xplore databases. Quantitative data were descriptively analyzed, and qualitative data were described and synthesized using inductive thematic analysis.
Twenty-three studies were included in the analysis. The usability metrics evaluated most frequently were efficiency and usefulness, followed by user errors, satisfaction, learnability, effectiveness, and memorability. Methods used to assess usability included questionnaires in 20 (87%) studies, user trials in 17 (74%), interviews in 6 (26%), and heuristic evaluations in 3 (13%). Most CDSSs relied on manual data entry (18, 78%) rather than automatic input (2, 9%). Most CDSS outputs comprised a recommendation (18, 78%), with a minority advising a specific treatment (6, 26%) or providing a score, risk level, or likelihood of diagnosis (6, 26%). Interviews and heuristic evaluations identified more usability-related barriers to, and facilitators of, adoption than did questionnaires and user trials.
A wide range of metrics and methods are used to evaluate the usability of mobile CDSSs in medical emergencies. Input of information into CDSSs was predominantly manual, which impeded usability. Studies employing both qualitative and quantitative methods to evaluate usability yielded more thorough results.
When planning CDSS projects, developers should consider multiple methods to comprehensively evaluate usability.