
Effects of explainable artificial intelligence in neurology decision support.

Affiliations

Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA.

Georgia Institute of Technology, Atlanta, GA, USA.

Publication Information

Ann Clin Transl Neurol. 2024 May;11(5):1224-1235. doi: 10.1002/acn3.52036. Epub 2024 Apr 5.


DOI: 10.1002/acn3.52036
PMID: 38581138
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11093252/
Abstract

OBJECTIVE: Artificial intelligence (AI)-based decision support systems (DSS) are used in medicine, but their underlying decision-making processes are usually unknown. Explainable AI (xAI) techniques provide insight into DSS, yet little is known about how to design xAI for clinicians. Here we investigate how various xAI techniques affect a clinician's interaction with an AI-based DSS in decision-making tasks, as compared to a general population.

METHODS: We conducted a randomized, blinded study comparing members of the Child Neurology Society and the American Academy of Neurology with a general population. Participants received recommendations from a DSS under a randomly assigned xAI intervention (decision tree, crowd-sourced agreement, case-based reasoning, probability scores, counterfactual reasoning, feature importance, templated language, or no explanation). Primary outcomes were test performance and the perceived explainability, trust, and social competence of the DSS. Secondary outcomes were compliance, understandability, and agreement per question.

RESULTS: We had 81 neurology participants and 284 in the general population. Decision trees were perceived as more explainable by the medical than by the general population (P < 0.01), and as more explainable than probability scores within the medical population (P < 0.001). Increasing neurology experience and perceived explainability degraded performance (P = 0.0214). Performance was predicted not by xAI method but by perceived explainability.

INTERPRETATION: xAI methods affect a medical population differently from a general population; xAI is therefore not uniformly beneficial, and there is no one-size-fits-all approach. Further user-centered xAI research targeting clinicians, and work to develop personalized DSS for clinicians, is needed.
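The paper compares eight explanation styles and does not publish code, but three of the styles it names (decision-tree rule traces, probability scores, and feature importance) map directly onto standard machine-learning tooling. As a rough illustration only, not the study's materials, here is a minimal scikit-learn sketch on synthetic data; the clinical feature names are hypothetical stand-ins.

```python
# Illustrative sketch (not from the paper) of three explanation styles the
# study compares: a decision-tree rule trace, probability scores, and
# feature importances. Synthetic data; feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["age", "seizure_freq", "eeg_abnormal", "med_count"]  # hypothetical
X, y = make_classification(n_samples=200, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Decision-tree explanation: the full rule structure behind a recommendation
# (the style the medical population rated most explainable in this study).
print(export_text(clf, feature_names=feature_names))

# Probability-score explanation: the model's confidence for a single case.
print(clf.predict_proba(X[:1]))  # e.g. [[0.1, 0.9]]

# Feature-importance explanation: which inputs drove the model overall.
print(dict(zip(feature_names, clf.feature_importances_)))
```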


Figures (from PMC11093252):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ef8b/11093252/ab7682aa4008/ACN3-11-1224-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ef8b/11093252/dafbd15807d1/ACN3-11-1224-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ef8b/11093252/f0699355c795/ACN3-11-1224-g003.jpg

Similar Articles

[1]
Effects of explainable artificial intelligence in neurology decision support.

Ann Clin Transl Neurol. 2024-5

[2]
How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.

JMIR AI. 2024-10-30

[3]
Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered Lens.

PLoS One. 2024

[4]
Exploring Algorithmic Explainability: Generating Explainable AI Insights for Personalized Clinical Decision Support Focused on Cannabis Intoxication in Young Adults.

2024 Int Conf Act Behav Comput (2024). 2024-5

[5]
Do explainable AI (XAI) methods improve the acceptance of AI in clinical practice? An evaluation of XAI methods on Gleason grading.

J Pathol Clin Res. 2025-3

[6]
The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.

BMC Med Ethics. 2024-10-1

[7]
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.

Diagnostics (Basel). 2022-1-19

[8]
Should AI models be explainable to clinicians?

Crit Care. 2024-9-12

[9]
Systematic literature review on the application of explainable artificial intelligence in palliative care studies.

Int J Med Inform. 2025-8

[10]
Evaluating Explanations From AI Algorithms for Clinical Decision-Making: A Social Science-Based Approach.

IEEE J Biomed Health Inform. 2024-7

Cited By

[1]
A Multi-Omics Integration Framework with Automated Machine Learning Identifies Peripheral Immune-Coagulation Biomarkers for Schizophrenia Risk Stratification.

Int J Mol Sci. 2025-8-7

[2]
Writing the Future: Artificial Intelligence, Handwriting, and Early Biomarkers for Parkinson's Disease Diagnosis and Monitoring.

Biomedicines. 2025-7-18

[3]
Machine learning in healthcare citizen science: A scoping review.

Int J Med Inform. 2025-3

[4]
Towards reconciling usability and usefulness of policy explanations for sequential decision-making systems.

Front Robot AI. 2024-7-22

References

[1]
Explainability pitfalls: Beyond dark patterns in explainable AI.

Patterns (N Y). 2024-6-14

[2]
Toward Explainable Artificial Intelligence for Precision Pathology.

Annu Rev Pathol. 2024-1-24

[3]
Artificial Intelligence-enabled Decision Support in Surgery: State-of-the-art and Future Directions.

Ann Surg. 2023-7-1

[4]
Review of Machine Learning and Artificial Intelligence (ML/AI) for the Pediatric Neurologist.

Pediatr Neurol. 2023-4

[5]
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.

Nat Mach Intell. 2019-5

[6]
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis.

Med Image Anal. 2022-7

[7]
The false hope of current approaches to explainable artificial intelligence in health care.

Lancet Digit Health. 2021-11

[8]
Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.

Kunstliche Intell (Oldenbourg). 2020

[9]
Causability and explainability of artificial intelligence in medicine.

Wiley Interdiscip Rev Data Min Knowl Discov. 2019

[10]
Visual Interpretation of Kernel-Based Prediction Models.

Mol Inform. 2011-9-5
