How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.

Author Information

Rosenbacke Rikard, Melhus Åsa, McKee Martin, Stuckler David

Affiliations

Centre for Corporate Governance, Department of Accounting, Copenhagen Business School, Frederiksberg, Denmark.

Department of Medical Sciences, Clinical Microbiology, Uppsala University, Uppsala, Sweden.

Publication Information

JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.

DOI: 10.2196/53207
PMID: 39476365
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11561425/
Abstract

BACKGROUND

Artificial intelligence (AI) has significant potential in clinical practice. However, its "black box" nature can lead clinicians to question its value. The challenge is to create sufficient trust for clinicians to feel comfortable using AI, but not so much that they defer to it even when it produces results that conflict with their clinical judgment in ways that lead to incorrect decisions. Explainable AI (XAI) aims to address this by providing explanations of how AI algorithms reach their conclusions. However, it remains unclear whether such explanations foster an appropriate degree of trust to ensure the optimal use of AI in clinical practice.

OBJECTIVE

This study aims to systematically review and synthesize empirical evidence on the impact of XAI on clinicians' trust in AI-driven clinical decision-making.

METHODS

A systematic review was conducted in accordance with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, searching PubMed and Web of Science databases. Studies were included if they empirically measured the impact of XAI on clinicians' trust using cognition- or affect-based measures. Out of 778 articles screened, 10 met the inclusion criteria. We assessed the risk of bias using standard tools appropriate to the methodology of each paper.
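
The abstract does not reproduce the authors' actual search strings, so the following is only a minimal, illustrative sketch of how a PubMed query of this kind could be run programmatically. It assumes the Biopython package is available; the query terms and contact email are hypothetical placeholders, not the review's real search strategy.

    # Illustrative sketch only: not the review's actual search strategy.
    # Requires Biopython (pip install biopython); query terms are hypothetical.
    from Bio import Entrez

    Entrez.email = "you@example.org"  # NCBI requests a contact address for E-utilities calls

    # Hypothetical query combining XAI, trust, and clinician terms
    query = '("explainable artificial intelligence" OR XAI) AND trust AND (clinician OR physician)'

    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    record = Entrez.read(handle)
    handle.close()

    print("Total hits:", record["Count"])    # number of matching PubMed records
    print("First PMIDs:", record["IdList"])  # PMIDs of the first 20 hits

Note that retmax only limits how many PMIDs are returned in a single call; record["Count"] still reports the total number of matching records.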

RESULTS

The risk of bias in all papers was moderate or moderate to high. All included studies operationalized trust primarily through cognitive-based definitions, with 2 also incorporating affect-based measures. Out of these, 5 studies reported that XAI increased clinicians' trust compared with standard AI, particularly when the explanations were clear, concise, and relevant to clinical practice. In addition, 3 studies found no significant effect of XAI on trust, and the presence of explanations does not automatically improve trust. Notably, 2 studies highlighted that XAI could either enhance or diminish trust, depending on the complexity and coherence of the provided explanations. The majority of studies suggest that XAI has the potential to enhance clinicians' trust in recommendations generated by AI. However, complex or contradictory explanations can undermine this trust. More critically, trust in AI is not inherently beneficial, as AI recommendations are not infallible. These findings underscore the nuanced role of explanation quality and suggest that trust can be modulated through the careful design of XAI systems.

CONCLUSIONS

Excessive trust in incorrect advice generated by AI can adversely impact clinical accuracy, just as can happen when correct advice is distrusted. Future research should focus on refining both cognitive and affect-based measures of trust and on developing strategies to achieve an appropriate balance in terms of trust, preventing both blind trust and undue skepticism. Optimizing trust in AI systems is essential for their effective integration into clinical practice.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4faa/11561425/b0a18e365772/ai_v3i1e53207_fig1.jpg

Similar Articles

1. How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review. JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.
2. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.
3. Decoding the black box: Explainable AI (XAI) for cancer diagnosis, prognosis, and treatment planning-A state-of-the art systematic review. Int J Med Inform. 2025 Jan;193:105689. doi: 10.1016/j.ijmedinf.2024.105689. Epub 2024 Nov 4.
4. A review of evaluation approaches for explainable AI with applications in cardiology. Artif Intell Rev. 2024;57(9):240. doi: 10.1007/s10462-024-10852-w. Epub 2024 Aug 9.
5. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
6. A historical perspective of biomedical explainable AI research. Patterns (N Y). 2023 Sep 8;4(9):100830. doi: 10.1016/j.patter.2023.100830.
7. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis. Comput Struct Biotechnol J. 2024 Aug 12;24:542-560. doi: 10.1016/j.csbj.2024.08.005. eCollection 2024 Dec.
8. Human-centered evaluation of explainable AI applications: a systematic review. Front Artif Intell. 2024 Oct 17;7:1456486. doi: 10.3389/frai.2024.1456486. eCollection 2024.
9. Explainable AI decision support improves accuracy during telehealth strep throat screening. Commun Med (Lond). 2024 Jul 24;4(1):149. doi: 10.1038/s43856-024-00568-x.
10. Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma. Nat Commun. 2024 Jan 15;15(1):524. doi: 10.1038/s41467-023-43095-4.

Cited By

1. Role of artificial intelligence-based ocular biomarkers in hepatobiliary diseases: A scoping review. World J Hepatol. 2025 Aug 27;17(8):109801. doi: 10.4254/wjh.v17.i8.109801.
2. Integrating tumor location into artificial intelligence-based prognostic models in cancer. World J Clin Oncol. 2025 Aug 24;16(8):109934. doi: 10.5306/wjco.v16.i8.109934.
3. Role of artificial intelligence in congenital heart disease. World J Clin Pediatr. 2025 Sep 9;14(3):105926. doi: 10.5409/wjcp.v14.i3.105926.
4. A mixed methods evaluation of an antimicrobial prescribing clinical decision support system app. NPJ Antimicrob Resist. 2025 Aug 18;3(1):71. doi: 10.1038/s44259-025-00146-8.
5. Multimodal artificial intelligence for subepithelial lesion classification and characterization: a multicenter comparative study (with video). BMC Med Inform Decis Mak. 2025 Aug 14;25(1):307. doi: 10.1186/s12911-025-03147-9.
6. Adoption and perception of LLM-based chatbots in health care: an exploratory cross-sectional survey of individuals with rheumatic diseases. Rheumatol Adv Pract. 2025 Jul 12;9(3):rkaf083. doi: 10.1093/rap/rkaf083. eCollection 2025.
7. Writing the Future: Artificial Intelligence, Handwriting, and Early Biomarkers for Parkinson's Disease Diagnosis and Monitoring. Biomedicines. 2025 Jul 18;13(7):1764. doi: 10.3390/biomedicines13071764.
8. Editorial: Integrated clinical management and neurorehabilitation for lumbosacral spinal diseases. Front Neurol. 2025 Jun 24;16:1590602. doi: 10.3389/fneur.2025.1590602. eCollection 2025.
9. The Effectiveness of a Custom AI Chatbot for Type 2 Diabetes Mellitus Health Literacy: Development and Evaluation Study. J Med Internet Res. 2025 May 5;27:e70131. doi: 10.2196/70131.
10. Contouring in transition: perceptions of AI-based autocontouring by radiation oncologists and medical physicists in German-speaking countries. Strahlenther Onkol. 2025 Apr 28. doi: 10.1007/s00066-025-02403-1.

References

1. Placing Trust at the Heart of Health Policy and Systems. Int J Health Policy Manag. 2024;13:8410. doi: 10.34172/ijhpm.2024.8410. Epub 2024 May 7.
2. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.
3. Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays. Sci Rep. 2023 Jan 25;13(1):1383. doi: 10.1038/s41598-023-28633-w.
4. Does AI explainability affect physicians' intention to use AI? Int J Med Inform. 2022 Dec;168:104884. doi: 10.1016/j.ijmedinf.2022.104884. Epub 2022 Oct 8.
5. UK reporting radiographers' perceptions of AI in radiographic image interpretation - Current perspectives and future developments. Radiography (Lond). 2022 Nov;28(4):881-888. doi: 10.1016/j.radi.2022.06.006. Epub 2022 Jul 1.
6. Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review. IEEE Rev Biomed Eng. 2023;16:5-21. doi: 10.1109/RBME.2022.3185953. Epub 2023 Jan 5.
7. Machine Learning for Clinical Decision-Making: Challenges and Opportunities in Cardiovascular Imaging. Front Cardiovasc Med. 2022 Jan 4;8:765693. doi: 10.3389/fcvm.2021.765693. eCollection 2021.
8. AI in health and medicine. Nat Med. 2022 Jan;28(1):31-38. doi: 10.1038/s41591-021-01614-0. Epub 2022 Jan 20.
9. Explainable recommendation: when design meets trust calibration. World Wide Web. 2021;24(5):1857-1884. doi: 10.1007/s11280-021-00916-0. Epub 2021 Aug 2.
10. Trusting Automation: Designing for Responsivity and Resilience. Hum Factors. 2023 Feb;65(1):137-165. doi: 10.1177/00187208211009995. Epub 2021 Apr 27.