

Similar Articles

1. When time is of the essence: ethical reconsideration of XAI in time-sensitive environments.
   J Med Ethics. 2025 Jul 23;51(8):516-520. doi: 10.1136/jme-2024-110046.
2. Ethical implications of AI-driven clinical decision support systems on healthcare resource allocation: a qualitative study of healthcare professionals' perspectives.
   BMC Med Ethics. 2024 Dec 21;25(1):148. doi: 10.1186/s12910-024-01151-8.
3. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
   J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
4. Expectations and Requirements of Surgical Staff for an AI-Supported Clinical Decision Support System for Older Patients: Qualitative Study.
   JMIR Aging. 2024 Dec 17;7:e57899. doi: 10.2196/57899.
5. Sexual Harassment and Prevention Training
6. Systematic literature review on the application of explainable artificial intelligence in palliative care studies.
   Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.
7. Decoding the black box: Explainable AI (XAI) for cancer diagnosis, prognosis, and treatment planning-A state-of-the art systematic review.
   Int J Med Inform. 2025 Jan;193:105689. doi: 10.1016/j.ijmedinf.2024.105689. Epub 2024 Nov 4.
8. Designing Clinical Decision Support Systems (CDSS)-A User-Centered Lens of the Design Characteristics, Challenges, and Implications: Systematic Review.
   J Med Internet Res. 2025 Jun 20;27:e63733. doi: 10.2196/63733.
9. Interventions to improve safe and effective medicines use by consumers: an overview of systematic reviews.
   Cochrane Database Syst Rev. 2014 Apr 29;2014(4):CD007768. doi: 10.1002/14651858.CD007768.pub3.
10. Perspectives of Health Care Professionals on the Use of AI to Support Clinical Decision-Making in the Management of Multiple Long-Term Conditions: Interview Study.
    J Med Internet Res. 2025 Jul 4;27:e71980. doi: 10.2196/71980.


When time is of the essence: ethical reconsideration of XAI in time-sensitive environments.

Authors

Wabro Andreas, Herrmann Markus, Winkler Eva C

Affiliations

National Center for Tumor Diseases (NCT) Heidelberg, NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Germany; Heidelberg University, Medical Faculty Heidelberg; Heidelberg University Hospital, Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg, Germany.

Publication Information

J Med Ethics. 2025 Jul 23;51(8):516-520. doi: 10.1136/jme-2024-110046.

DOI: 10.1136/jme-2024-110046
PMID: 39299730
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12322429/
Abstract

The objective of explainable artificial intelligence systems designed for clinical decision support (XAI-CDSS) is to enhance physicians' diagnostic performance, confidence and trust through the implementation of interpretable methods, thus providing for a superior epistemic positioning, a robust foundation for critical reflection and trustworthiness in times of heightened technological dependence. However, recent studies have revealed shortcomings in achieving these goals, questioning the widespread endorsement of XAI by medical professionals, ethicists and policy-makers alike. Based on a surgical use case, this article challenges generalising calls for XAI-CDSS and emphasises the significance of time-sensitive clinical environments which frequently preclude adequate consideration of system explanations. Therefore, XAI-CDSS may not be able to meet expectations of augmenting clinical decision-making in specific circumstances where time is of the essence. This article, by employing a principled ethical balancing methodology, highlights several fallacies associated with XAI deployment in time-sensitive clinical situations and recommends XAI endorsement only where scientific evidence or stakeholder assessments do not contradict such deployment in specific target settings.
