Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States.
Department of Behavioral and Community Health Sciences, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, United States.
Appl Clin Inform. 2023 Aug;14(4):789-802. doi: 10.1055/s-0043-1775565. Epub 2023 Oct 4.
Forecasting and treating critical instability can be optimized by artificial intelligence (AI)-enabled clinical decision support. It is important that the user-facing display of AI output facilitate clinical thinking and workflow for all disciplines involved in bedside care.
Our objective was to engage multidisciplinary users (nurses, physicians, nurse practitioners, and physician assistants) in the development of a graphical user interface (GUI) to present an AI-derived risk score.
Intensive care unit (ICU) clinicians participated in focus groups to provide input on an instability risk forecast presented in a prototype GUI. Two stratified rounds of three focus groups each (nurses only, providers only, then combined) were moderated by a focus group methodologist. After round 1, GUI design changes were made and presented in round 2. Focus groups were recorded and transcribed, and deidentified transcripts were independently coded by three researchers. Codes were coalesced into emerging themes.
Twenty-three ICU clinicians participated (11 nurses, 12 medical providers [3 mid-level providers and 9 physicians]). Six themes emerged: (1) analytics transparency, (2) graphical interpretability, (3) impact on practice, (4) value of trend synthesis of dynamic patient data, (5) decisional weight (weighing AI output during decision-making), and (6) display location (usability, concerns about patient/family GUI viewing). Nurses emphasized the value of objective GUI information to support communication and of optimal GUI placement. Providers emphasized the need for recommendation interpretability and concern about impairing trainees' critical thinking. All disciplines valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until it is proven trustworthy.
Gaining input from all clinical users is important when designing AI-derived GUIs. Results highlight that intelligent decision support system technologies in health care need to be transparent about how they work, easy to read and interpret, and minimally disruptive to current workflow, and that decision support components need to be used as an adjunct to human decision-making.