Suppr 超能文献


Explainable AI improves task performance in human-AI collaboration.

Authors

Senoner Julian, Schallmoser Simon, Kratzwald Bernhard, Feuerriegel Stefan, Netland Torbjørn

Affiliations

ETH Zurich, Zurich, Switzerland.

EthonAI, Zurich, Switzerland.

Publication

Sci Rep. 2024 Dec 28;14(1):31150. doi: 10.1038/s41598-024-82501-9.

DOI: 10.1038/s41598-024-82501-9
PMID: 39730794
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11681242/
Abstract

Artificial intelligence (AI) provides considerable opportunities to assist human work. However, one crucial challenge of human-AI collaboration is that many AI algorithms operate in a black-box manner, where how the AI makes predictions remains opaque. This makes it difficult for humans to validate a prediction made by AI against their own domain knowledge. For this reason, we hypothesize that augmenting humans with explainable AI improves task performance in human-AI collaboration. To test this hypothesis, we implement explainable AI in the form of visual heatmaps in inspection tasks conducted by domain experts. Visual heatmaps have the advantage that they are easy to understand and help to localize relevant parts of an image. We then compare participants who were supported by either (a) black-box AI or (b) explainable AI, where the latter helps them follow AI predictions when the AI is accurate and overrule the AI when its predictions are wrong. We conducted two preregistered experiments with representative, real-world visual inspection tasks from manufacturing and medicine. The first experiment was conducted with factory workers from an electronics factory, who performed [Formula: see text] assessments of whether electronic products have defects. The second experiment was conducted with radiologists, who performed [Formula: see text] assessments of chest X-ray images to identify lung lesions. The results of our experiments with domain experts performing real-world tasks show that task performance improves when participants are supported by explainable AI with heatmaps instead of black-box AI. We find that, compared to black-box AI, explainable AI as a decision aid improved task performance by 7.7 percentage points (95% confidence interval [CI]: 3.3% to 12.0%, [Formula: see text]) in the manufacturing experiment and by 4.7 percentage points (95% CI: 1.1% to 8.3%, [Formula: see text]) in the medical experiment. These gains represent a significant improvement in task performance.
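The heatmap idea described in the abstract can be illustrated with a minimal sketch: a model assigns a score to an image, and the per-pixel contributions to that score are rendered as a normalized heatmap that an inspector can compare against their own judgment. The toy linear scorer, random data, and attribution rule below are all illustrative assumptions; the paper's actual models and attribution method are not specified here.

```python
import numpy as np

# Toy setup: an 8x8 grayscale "inspection" image and hypothetical
# per-pixel weights standing in for a trained defect scorer.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
weights = rng.standard_normal((8, 8))

# Scalar "defect" score from the toy linear model.
score = float((weights * image).sum())

# Per-pixel contribution magnitudes, normalized to [0, 1] so they
# can be overlaid on the image as a visual heatmap.
heatmap = np.abs(weights * image)
heatmap = heatmap / heatmap.max()

print(heatmap.shape, float(heatmap.max()))
```

High values in `heatmap` mark the image regions that drove the prediction, which is what lets a domain expert decide whether to follow or overrule the AI.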
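The reported effects are differences in proportions with 95% confidence intervals (e.g. 7.7 percentage points, CI 3.3% to 12.0%). A sketch of how such an interval is computed, using the standard Wald normal approximation and hypothetical counts (the paper's raw counts are not given here):

```python
import math

def diff_proportion_ci(k1, n1, k2, n2, z=1.96):
    """Wald 95% CI for the difference in proportions p1 - p2."""
    p1, p2 = k1 / n1, k2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Hypothetical example: 440/500 correct assessments with explainable AI
# vs 400/500 with black-box AI -> an 8.0 percentage-point difference.
d, lo, hi = diff_proportion_ci(440, 500, 400, 500)
print(f"diff = {d:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An interval whose lower bound stays above zero, as in both experiments here, is what supports the claim that explainable AI outperformed black-box AI.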


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b69/11681242/72659cb9638d/41598_2024_82501_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b69/11681242/bb0a75868840/41598_2024_82501_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5b69/11681242/76889cee5433/41598_2024_82501_Fig3_HTML.jpg

Similar Articles

1. Explainable AI improves task performance in human-AI collaboration.
Sci Rep. 2024 Dec 28;14(1):31150. doi: 10.1038/s41598-024-82501-9.
2. Effect of Uncertainty-Aware AI Models on Pharmacists' Reaction Time and Decision-Making in a Web-Based Mock Medication Verification Task: Randomized Controlled Trial.
JMIR Med Inform. 2025 Apr 18;13:e64902. doi: 10.2196/64902.
3. A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study.
JMIR Hum Factors. 2024 Jan 25;11:e53378. doi: 10.2196/53378.
4. Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task.
Sci Rep. 2024 Apr 28;14(1):9736. doi: 10.1038/s41598-024-60220-5.
5. Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task.
Artif Intell Med. 2024 Feb;148:102751. doi: 10.1016/j.artmed.2023.102751. Epub 2024 Jan 2.
6. Explainable deep learning diagnostic system for prediction of lung disease from medical images.
Comput Biol Med. 2024 Mar;170:108012. doi: 10.1016/j.compbiomed.2024.108012. Epub 2024 Jan 19.
7. Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
8. Towards full integration of explainable artificial intelligence in colon capsule endoscopy's pathway.
Sci Rep. 2025 Feb 18;15(1):5960. doi: 10.1038/s41598-025-89648-z.
9. Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays.
Sci Rep. 2023 Jan 25;13(1):1383. doi: 10.1038/s41598-023-28633-w.
10. Current status and future directions of explainable artificial intelligence in medical imaging.
Eur J Radiol. 2025 Feb;183:111884. doi: 10.1016/j.ejrad.2024.111884. Epub 2024 Dec 6.

Cited By

1. Investigating the role of AI explanations in lay individuals' comprehension of radiology reports: A metacognition lens.
PLoS One. 2025 Sep 10;20(9):e0321342. doi: 10.1371/journal.pone.0321342. eCollection 2025.
