
Eye tracking insights into physician behaviour with safe and unsafe explainable AI recommendations.

Authors

Nagendran Myura, Festor Paul, Komorowski Matthieu, Gordon Anthony C, Faisal Aldo A

Affiliations

UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK.

Division of Anaesthetics, Pain Medicine, and Intensive Care, Imperial College London, London, UK.

Publication

NPJ Digit Med. 2024 Aug 2;7(1):202. doi: 10.1038/s41746-024-01200-x.

DOI: 10.1038/s41746-024-01200-x
PMID: 39095449
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11297294/
Abstract

We studied clinical AI-supported decision-making as an example of a high-stakes setting in which explainable AI (XAI) has been proposed as useful (by theoretically providing physicians with context for the AI suggestion and thereby helping them to reject unsafe AI recommendations). Here, we used objective neurobehavioural measures (eye-tracking) to see how physicians respond to XAI with N = 19 ICU physicians in a hospital's clinical simulation suite. Prescription decisions were made both pre- and post-reveal of either a safe or unsafe AI recommendation and four different types of simultaneously presented XAI. We used overt visual attention as a marker for where physician mental attention was directed during the simulations. Unsafe AI recommendations attracted significantly greater attention than safe AI recommendations. However, there was no appreciably higher level of attention placed onto any of the four types of explanation during unsafe AI scenarios (i.e. XAI did not appear to 'rescue' decision-makers). Furthermore, self-reported usefulness of explanations by physicians did not correlate with the level of attention they devoted to the explanations reinforcing the notion that using self-reports alone to evaluate XAI tools misses key aspects of the interaction behaviour between human and machine.
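The abstract's final finding rests on a rank correlation between two per-physician measures: self-reported explanation usefulness and gaze dwell time on the explanations. A minimal sketch of that kind of analysis, with entirely made-up example numbers (not the study's data) and a hand-rolled Spearman correlation to keep it dependency-free:

```python
# Hypothetical sketch (not the authors' code): correlating self-reported
# usefulness ratings with eye-tracking dwell times, as the abstract's
# final analysis describes. All data below are invented for illustration.

def rank(values):
    """Return 1-based average ranks, assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: per-physician dwell time on an explanation (ms)
# and that physician's 1-5 usefulness rating.
dwell = [320, 1150, 480, 900, 210, 760]
rating = [4, 2, 5, 3, 4, 1]
print(round(spearman(dwell, rating), 3))
```

A coefficient near zero on the real data would correspond to the paper's conclusion that stated usefulness and actual visual attention can diverge, which is why the authors argue self-reports alone are insufficient for evaluating XAI.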


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30b8/11297294/7d10b7621d58/41746_2024_1200_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30b8/11297294/d76e788633a0/41746_2024_1200_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30b8/11297294/53798f876dcb/41746_2024_1200_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30b8/11297294/f9c0988d286a/41746_2024_1200_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30b8/11297294/4d86849254f3/41746_2024_1200_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/30b8/11297294/390c909cb9ca/41746_2024_1200_Fig6_HTML.jpg

Similar Articles

1
Eye tracking insights into physician behaviour with safe and unsafe explainable AI recommendations.
NPJ Digit Med. 2024 Aug 2;7(1):202. doi: 10.1038/s41746-024-01200-x.
2
Quantifying the impact of AI recommendations with explanations on prescription decision making.
NPJ Digit Med. 2023 Nov 7;6(1):206. doi: 10.1038/s41746-023-00955-z.
3
A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.
4
Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification.
Br J Psychol. 2024 Jun 10. doi: 10.1111/bjop.12714.
5
A review of evaluation approaches for explainable AI with applications in cardiology.
Artif Intell Rev. 2024;57(9):240. doi: 10.1007/s10462-024-10852-w. Epub 2024 Aug 9.
6
Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI.
Front Bioinform. 2023 Jul 5;3:1194993. doi: 10.3389/fbinf.2023.1194993. eCollection 2023.
7
Designing explainable AI to improve human-AI team performance: A medical stakeholder-driven scoping review.
Artif Intell Med. 2024 Mar;149:102780. doi: 10.1016/j.artmed.2024.102780. Epub 2024 Jan 20.
8
Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches.
Eur J Radiol. 2023 May;162:110787. doi: 10.1016/j.ejrad.2023.110787. Epub 2023 Mar 21.
9
AI and XAI second opinion: the danger of false confirmation in human-AI collaboration.
J Med Ethics. 2025 May 21;51(6):396-399. doi: 10.1136/jme-2024-110074.
10
Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches.
Eur J Radiol. 2023 May;162:110786. doi: 10.1016/j.ejrad.2023.110786. Epub 2023 Mar 20.

Cited By

1
Empirically derived evaluation requirements for responsible deployments of AI in safety-critical settings.
NPJ Digit Med. 2025 Jun 18;8(1):374. doi: 10.1038/s41746-025-01784-y.
2
Safety of human-AI cooperative decision-making within intensive care: A physical simulation study.
PLOS Digit Health. 2025 Feb 24;4(2):e0000726. doi: 10.1371/journal.pdig.0000726. eCollection 2025 Feb.
3
Bridging human and machine intelligence: Reverse-engineering radiologist intentions for clinical trust and adoption.
Comput Struct Biotechnol J. 2024 Nov 8;24:711-723. doi: 10.1016/j.csbj.2024.11.012. eCollection 2024 Dec.

References

1
Quantifying the impact of AI recommendations with explanations on prescription decision making.
NPJ Digit Med. 2023 Nov 7;6(1):206. doi: 10.1038/s41746-023-00955-z.
2
Examining explainable clinical decision support systems with think aloud protocols.
PLoS One. 2023 Sep 14;18(9):e0291443. doi: 10.1371/journal.pone.0291443. eCollection 2023.
3
The Artificial Face (ART-F) Project: Addressing the Problem of Interpretability, Interface, and Trust in Artificial Intelligence.
Cyberpsychol Behav Soc Netw. 2023 Apr;26(4):318-320. doi: 10.1089/cyber.2023.29273.ceu. Epub 2023 Mar 24.
4
Blink Rate Measured In Situ Decreases While Reading From Printed Text or Digital Devices, Regardless of Task Duration, Difficulty, or Viewing Distance.
Invest Ophthalmol Vis Sci. 2023 Feb 1;64(2):14. doi: 10.1167/iovs.64.2.14.
5
Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment.
BMJ Health Care Inform. 2022 Jul;29(1). doi: 10.1136/bmjhci-2022-100549.
6
Measuring Cognition Load Using Eye-Tracking Parameters Based on Algorithm Description Tools.
Sensors (Basel). 2022 Jan 25;22(3):912. doi: 10.3390/s22030912.
7
Nudging within learning health systems: next generation decision support to improve cardiovascular care.
Eur Heart J. 2022 Mar 31;43(13):1296-1306. doi: 10.1093/eurheartj/ehac030.
8
The false hope of current approaches to explainable artificial intelligence in health care.
Lancet Digit Health. 2021 Nov;3(11):e745-e750. doi: 10.1016/S2589-7500(21)00208-9.
9
Attitudes towards Trusting Artificial Intelligence Insights and Factors to Prevent the Passive Adherence of GPs: A Pilot Study.
J Clin Med. 2021 Jul 14;10(14):3101. doi: 10.3390/jcm10143101.
10
Understanding, explaining, and utilizing medical artificial intelligence.
Nat Hum Behav. 2021 Dec;5(12):1636-1642. doi: 10.1038/s41562-021-01146-0. Epub 2021 Jun 28.