Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator.

Author Information

Department of Medicine, University of Auckland, Auckland, New Zealand.

Department of Emergency Medicine, Whangarei Hospital, Whangarei, New Zealand.

Publication Information

J Am Med Inform Assoc. 2020 Apr 1;27(4):592-600. doi: 10.1093/jamia/ocz229.

Abstract

OBJECTIVE

Implementation of machine learning (ML) may be limited by patients' right to "meaningful information about the logic involved" when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods.
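The explanation conditions in this study rest on model-agnostic explainability methods, which interrogate only a model's inputs and predictions rather than its internals. The abstract does not name the two methods used, so the sketch below is purely illustrative: it computes permutation feature importance, one common model-agnostic technique, for a hypothetical risk model with invented feature names.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Hypothetical risk-calculator features; names are illustrative only.
feature_names = ["age", "heart_rate", "troponin", "ecg_abnormal", "prior_mi"]
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance shuffles one feature at a time and measures the
# resulting drop in performance; it needs only the model's predictions,
# so the same explanation could accompany any underlying risk calculator.
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, mean_imp in sorted(zip(feature_names, imp.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name}: importance = {mean_imp:.3f}")
```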

MATERIALS AND METHODS

We designed a survey for physicians with a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control) and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association.
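As a minimal sketch of the analysis named above: a Cochran-Mantel-Haenszel test checks whether two variables are associated after stratifying on a third. Assuming, for illustration, that understanding and trust responses are dichotomized into 2×2 tables stratified by the 3 ML output conditions (all counts below are invented, and the published analysis may have treated the rating responses differently), statsmodels can run the test directly.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical 2x2 tables (understood yes/no x trusted yes/no),
# one stratum per ML output condition; all counts are invented.
tables = [
    np.array([[40, 10], [15, 35]]),  # control: no explanation of logic
    np.array([[55, 12], [10, 23]]),  # model-agnostic explanation, method 1
    np.array([[60,  9], [ 8, 23]]),  # model-agnostic explanation, method 2
]

st = StratifiedTable(tables)
res = st.test_null_odds()  # CMH test of association across strata
print(f"CMH chi-square = {res.statistic:.2f}, P = {res.pvalue:.4f}")
```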

RESULTS

The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability (P < .001), between physician understanding and trust (P < .001), and between explainability and trust (P < .001). ML outputs that used model-agnostic explainability methods were preferred by 88% of physicians when compared with the control condition; however, no particular ML explainability method had a greater influence on intended physician behavior.

CONCLUSIONS

Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations but the explainability method did not alter intended physician behavior.

