
Evaluating the impact of explainable AI on clinicians' decision-making: A study on ICU length of stay prediction.

Author Information

Jung Jinsun, Kang Sunghoon, Choi Jeeyae, El-Kareh Robert, Lee Hyungbok, Kim Hyeoneui

Affiliations

College of Nursing, Seoul National University, Seoul, Republic of Korea; Center for Human-Caring Nurse Leaders for the Future by Brain Korea 21 (BK 21) Four Project, College of Nursing, Seoul National University, Seoul, Republic of Korea.

The Department of Science Studies, Seoul National University, Seoul, Republic of Korea.

Publication Information

Int J Med Inform. 2025 Sep;201:105943. doi: 10.1016/j.ijmedinf.2025.105943. Epub 2025 Apr 21.

DOI: 10.1016/j.ijmedinf.2025.105943
PMID: 40318498
Abstract

BACKGROUND

Explainable Artificial Intelligence (XAI) is increasingly vital in healthcare, where clinicians need to understand and trust AI-generated recommendations. However, the impact of AI model explanations on clinical decision-making remains insufficiently explored.

OBJECTIVES

To evaluate how AI model explanations influence clinicians' mental models, trust, and satisfaction regarding machine learning-based predictions of Intensive Care Unit (ICU) Length of Stay (LOS).

METHODS

This retrospective mixed-methods study analyzed electronic health record data from 8,579 patients admitted to a surgical ICU in South Korea between 2019 and 2022. Seven machine learning models were developed and evaluated to predict ICU LOS at 2-hour intervals during the initial 12 hours post-admission. The Random Forest (RF) model in the 10- to 12-hour window, with an AUROC of 0.903, was selected for explanation using SHapley Additive exPlanations. Fifteen ICU clinicians assessed four distinct types of explanations ('Why', 'Why not', 'How to', and 'What if') via web-based experiments, surveys, and interviews.
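Model selection in the step above hinges on AUROC computed per prediction window. As a minimal, library-free sketch (the scores and the `auroc` helper are illustrative, not from the study), AUROC can be computed directly as the probability that a randomly chosen positive case is scored above a randomly chosen negative case:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the Mann-Whitney probability that a positive
    case outranks a negative one (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted probabilities of prolonged LOS (positives)
# and short LOS (negatives) for patients in one 2-hour window.
prolonged = [0.91, 0.84, 0.62]
short = [0.40, 0.33, 0.15]
print(auroc(prolonged, short))  # 1.0: every positive outranks every negative
```

In practice the study's AUROC of 0.903 would come from a library routine over the held-out ICU cohort; the pairwise form here is just the definition made explicit.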

RESULTS

Clinicians' feature selections aligned more closely with the RF model after explanations, as demonstrated by an increase in Spearman correlation from -0.147 (p = 0.275) to 0.868 (p < 0.001). The average trust score improved from 2.8 to 3.9. The average satisfaction scores for the 'Why', 'Why not', 'How to', and 'What if' explanations were 3.3, 3.8, 3.6, and 4.1, respectively.
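The alignment metric reported above is Spearman's rank correlation between clinicians' feature rankings and the model's. A minimal sketch of how such a correlation is computed (assuming no tied values, so the closed-form d² formula applies; the feature scores are illustrative, not from the study):

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation via 1 - 6*sum(d^2)/(n*(n^2 - 1)).
    The closed form is valid only when neither list has tied values."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Illustrative importance scores for five features: the RF model's
# ranking vs. clinicians' rankings before and after explanations.
model = [0.30, 0.25, 0.20, 0.15, 0.10]
before = [0.10, 0.30, 0.15, 0.25, 0.20]  # weak alignment
after = [0.28, 0.26, 0.19, 0.16, 0.11]   # strong alignment
print(spearman_rho(before, model), spearman_rho(after, model))
```

With these illustrative numbers the "after" ranking matches the model's order exactly (rho = 1.0) while the "before" ranking is weakly negative, mirroring the direction of the shift the study reports.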

CONCLUSION

AI model explanations notably enhanced clinicians' understanding of and trust in AI-generated ICU LOS predictions, although complete alignment with their mental models was not achieved. Further refinement of AI model explanations is needed to support clinician-AI collaboration and the integration of AI into clinical practice.

