

Solving the explainable AI conundrum by bridging clinicians' needs and developers' goals.

Author Information

Bienefeld Nadine, Boss Jens Michael, Lüthy Rahel, Brodbeck Dominique, Azzati Jan, Blaser Mirco, Willms Jan, Keller Emanuela

Affiliations

Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland.

Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zürich, Switzerland.

Publication Information

NPJ Digit Med. 2023 May 22;6(1):94. doi: 10.1038/s41746-023-00837-4.


DOI: 10.1038/s41746-023-00837-4
PMID: 37217779
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10202353/
Abstract

Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.
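The tension the abstract describes (model interpretability vs. clinical plausibility) typically surfaces around feature-attribution explanations. The following is a minimal sketch, not from the paper: it trains a toy model on synthetic data with hypothetical ICU-style feature names and produces a permutation-importance explanation of the kind developers and clinicians would jointly inspect. All names and data here are illustrative assumptions.

```python
# Minimal sketch of a feature-attribution explanation for a clinical
# decision support model. Data and feature names are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))
feature_names = ["heart_rate", "mean_arterial_pressure", "icp"]

# Synthetic outcome driven mostly by the third feature (intracranial pressure)
y = (0.2 * X[:, 0] - 0.3 * X[:, 1] + 1.5 * X[:, 2]
     + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does shuffling each feature degrade
# model accuracy? Larger drops mean the model relies on that feature more.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

A ranking like this answers the developer's question (which inputs drive the model?) but, as the study notes, not necessarily the clinician's (is that reliance physiologically plausible for this patient?).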


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/747d/10202941/158832427230/41746_2023_837_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/747d/10202941/121539349297/41746_2023_837_Fig2_HTML.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/747d/10202941/c5143849b707/41746_2023_837_Fig3_HTML.jpg

Similar Articles

[1]
Solving the explainable AI conundrum by bridging clinicians' needs and developers' goals.

NPJ Digit Med. 2023-5-22

[2]
Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review.

Heliyon. 2023-5-8

[3]
Demystifying XAI: Requirements for Understandable XAI Explanations.

Stud Health Technol Inform. 2024-8-22

[4]
Effects of explainable artificial intelligence in neurology decision support.

Ann Clin Transl Neurol. 2024-5

[5]
Exploring Explainable AI Techniques for Text Classification in Healthcare: A Scoping Review.

Stud Health Technol Inform. 2024-8-22

[6]
Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space.

J Neural Eng. 2024-8-8

[7]
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.

Diagnostics (Basel). 2022-1-19

[8]
Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches.

Eur J Radiol. 2023-5

[9]
Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis.

Comput Struct Biotechnol J. 2024-8-12

[10]
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review.

IEEE Rev Biomed Eng. 2023

Cited By

[1]
Explainable AI in medicine: challenges of integrating XAI into the future clinical routine.

Front Radiol. 2025-8-5

[2]
Transparent and Robust Artificial Intelligence-Driven Electrocardiogram Model for Left Ventricular Systolic Dysfunction.

Diagnostics (Basel). 2025-7-22

[3]
Application of artificial intelligence in echocardiography from 2009 to 2024: a bibliometric analysis.

Front Med (Lausanne). 2025-7-29

[4]
Development of an explainable machine learning model for Alzheimer's disease prediction using clinical and behavioural features.

MethodsX. 2025-7-7

[5]
Contrasting attitudes towards current and future artificial intelligence applications for computerised interpretation of electrocardiograms: a clinical stakeholder interview study.

JAMIA Open. 2025-7-21

[6]
Implementing Large Language Models in Health Care: Clinician-Focused Review With Interactive Guideline.

J Med Internet Res. 2025-7-11

[7]
A novel XAI framework for explainable AI-ECG using generative counterfactual XAI (GCX).

Sci Rep. 2025-7-2

[8]
Machine learning to detect melanoma exploiting nuclei morphology and Spatial organization.

Sci Rep. 2025-7-1

[9]
Discernibility in explanations: Designing more acceptable and meaningful machine learning models for medicine.

Comput Struct Biotechnol J. 2025-4-23

[10]
Practical AI application in psychiatry: historical review and future directions.

Mol Psychiatry. 2025-6-3

References

[1]
To explain or not to explain?-Artificial intelligence explainability in clinical decision support systems.

PLOS Digit Health. 2022-2-17

[2]
Addressing racial disparities in surgical care with machine learning.

NPJ Digit Med. 2022-9-30

[3]
A computational framework for discovering digital biomarkers of glycemic control.

NPJ Digit Med. 2022-8-8

[4]
Human-machine teaming is key to AI adoption: clinicians' experiences with a deployed machine learning system.

NPJ Digit Med. 2022-7-21

[5]
ICU Cockpit: a platform for collecting multimodal waveform data, AI-based computational disease modeling and real-time decision support in the intensive care unit.

J Am Med Inform Assoc. 2022-6-14

[6]
Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter.

BMJ Health Care Inform. 2022-2

[7]
Re-focusing explainability in medicine.

Digit Health. 2022-2-11

[8]
AI in health and medicine.

Nat Med. 2022-1

[9]
The false hope of current approaches to explainable artificial intelligence in health care.

Lancet Digit Health. 2021-11

[10]
VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models.

IEEE Trans Vis Comput Graph. 2022-1
