Examining the effect of explanation on satisfaction and trust in AI diagnostic systems.

Affiliation

Michigan Technological University, Houghton, MI, 49931, USA.

Publication

BMC Med Inform Decis Mak. 2021 Jun 3;21(1):178. doi: 10.1186/s12911-021-01542-6.

DOI: 10.1186/s12911-021-01542-6
PMID: 34082719
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8176739/
Abstract

BACKGROUND: Artificial intelligence (AI) has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, providing initial diagnoses before a patient is sent to a specialist and freeing health care professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them to merely provide accurate diagnoses and predictions; they will also need to explain, to both physicians and patients, why those diagnoses were made. Without such explanations, even accurate and correct diagnoses and treatments might be ignored or rejected.

METHOD: It is important to evaluate the effectiveness of these explanations and to understand the relative effectiveness of different kinds of explanation. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In the second experiment, we implemented different forms of explanation in a similar diagnosis scenario.

RESULTS: Results show that explanation helped improve satisfaction measures during the critical re-diagnosis period, but had little effect before re-diagnosis (while initial treatment was taking place) or after it (once an alternate diagnosis resolved the case successfully). Furthermore, initial "global" explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding about the AI. Results of the second experiment show that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than either no explanation or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage before re-diagnosis or once the diagnosis was successfully resolved.

CONCLUSION: These two studies support several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain.

Figures 1-6 (PMC image links):
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc98/8176739/84a873ea2980/12911_2021_1542_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc98/8176739/6553dca44e37/12911_2021_1542_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc98/8176739/b79f6ae24a87/12911_2021_1542_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc98/8176739/bc74b6baef61/12911_2021_1542_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc98/8176739/74a03da8b57b/12911_2021_1542_Fig5_HTML.jpg
Fig. 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc98/8176739/1d2918eb79c5/12911_2021_1542_Fig6_HTML.jpg

Similar Articles

[1] Examining the effect of explanation on satisfaction and trust in AI diagnostic systems. BMC Med Inform Decis Mak. 2021 Jun 3.
[2] Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems. J Med Syst. 2021 May 4.
[3] The Impact of Explanations on Layperson Trust in Artificial Intelligence-Driven Symptom Checker Apps: Experimental Study. J Med Internet Res. 2021 Nov 3.
[4] How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains. Mem Cognit. 2023 Oct.
[5] Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas. Cochrane Database Syst Rev. 2022 Feb 1.
[6] Trust in artificial intelligence for medical diagnoses. Prog Brain Res. 2020.
[7] ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions. Comput Methods Programs Biomed. 2022 Mar.
[8] Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study. JAMA. 2023 Dec 19.
[9] Medically-oriented design for explainable AI for stress prediction from physiological measurements. BMC Med Inform Decis Mak. 2022 Feb 11.
[10] Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes. Artif Intell Med. 2023 Mar.

Cited By

[1] Current Applications, Challenges, and Future Directions of Artificial Intelligence in Emergency Medicine: A Narrative Review. Arch Acad Emerg Med. 2025 Apr 15.
[2] Investigating Whether AI Will Replace Human Physicians and Understanding the Interplay of the Source of Consultation, Health-Related Stigma, and Explanations of Diagnoses on Patients' Evaluations of Medical Consultations: Randomized Factorial Experiment. J Med Internet Res. 2025 Mar 5.
[3] Prioritizing Trust in Podiatrists' Preference for AI in Supportive Roles Over Diagnostic Roles in Health Care: Qualitative Interview and Focus Group Study. JMIR Hum Factors. 2025 Feb 21.
[4] Leveraging artificial intelligence to reduce diagnostic errors in emergency medicine: Challenges, opportunities, and future directions. Acad Emerg Med. 2025 Mar.
[5] Expert gaze as a usability indicator of medical AI decision support systems: a preliminary study. NPJ Digit Med. 2024 Jul 27.
[6] Concept-based reasoning in medical imaging. Int J Comput Assist Radiol Surg. 2023 Jul.

References

[1] COVID-19 detection and heatmap generation in chest x-ray images. J Med Imaging (Bellingham). 2021 Jan.
[2] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020 Nov 30.
[3] AI Chatbot Design during an Epidemic Like the Novel Coronavirus. Healthcare (Basel). 2020 Jun 3.
[4] What do senior physicians think about AI and clinical decision support systems: Quantitative and qualitative analysis of data from specialty societies. Clin Med (Lond). 2020 May.
[5] Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. J Am Med Inform Assoc. 2020 Apr 1.
[6] Clustering Heatmap for Visualizing and Exploring Complex and High-dimensional Data Related to Chronic Kidney Disease. J Clin Med. 2020 Feb 2.
[7] Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat Med. 2020 Jan 6.
[8] Deep multi-instance heatmap regression for the detection of retinal vessel crossings and bifurcations in eye fundus images. Comput Methods Programs Biomed. 2020 Apr.
[9] Accuracy of a Chatbot (Ada) in the Diagnosis of Mental Disorders: Comparative Case Study With Lay and Expert Users. JMIR Form Res. 2019 Oct 29.
[10] Towards an Artificially Empathic Conversational Agent for Mental Health Applications: System Design and User Perceptions. J Med Internet Res. 2018 Jun 26.
