

Reporting radiographers' interaction with Artificial Intelligence - How do different forms of AI feedback impact trust and decision switching?

Authors

Rainey Clare, Bond Raymond, McConnell Jonathan, Hughes Ciara, Kumar Devinder, McFadden Sonyia

Affiliations

Ulster University, School of Health Sciences, York St, Belfast, Northern Ireland.

Ulster University, School of Computing, York St, Belfast, Northern Ireland.

Publication

PLOS Digit Health. 2024 Aug 7;3(8):e0000560. doi: 10.1371/journal.pdig.0000560. eCollection 2024 Aug.

DOI: 10.1371/journal.pdig.0000560
PMID: 39110687
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11305567/
Abstract

Artificial Intelligence (AI) has been increasingly integrated into healthcare settings, including the radiology department, to aid radiographic image interpretation and reporting by radiographers. Trust has been cited as a barrier to effective clinical implementation of AI. Appropriate trust will be important in the future use of AI to ensure the ethical use of these systems for the benefit of the patient, clinician and health services. Means of explainable AI, such as heatmaps, have been proposed to increase AI transparency and trust by elucidating which parts of the image the AI 'focussed on' when making its decision. The aim of this novel study was to quantify the impact of different forms of AI feedback on expert clinicians' trust. Whilst this study was conducted in the UK, it has potential international application and impact for AI interface design, either globally or in countries with similar cultural and/or economic status to the UK. A convolutional neural network was built for this study; trained, validated and tested on a publicly available dataset of MUsculoskeletal RAdiographs (MURA), with binary diagnoses and Gradient Class Activation Maps (GradCAM) as outputs. Reporting radiographers (n = 12) were recruited to this study from all four regions of the UK. Qualtrics was used to present each participant with a total of 18 complete examinations from the MURA test dataset (each examination contained more than one radiographic image). Participants were presented with the images first, images with heatmaps next and finally an AI binary diagnosis, in sequential order. Perception of trust in the AI system was obtained following the presentation of each heatmap and binary feedback. The participants were asked to indicate whether they would change their mind (or decision switch) in response to the AI feedback.
Participants disagreed with the AI heatmaps for the abnormal examinations 45.8% of the time and agreed with binary feedback on 86.7% of examinations (26/30 presentations). Only two participants indicated that they would decision switch in response to all AI feedback (GradCAM and binary) (0.7%, n = 2) across all datasets. 22.2% (n = 32) of participants agreed with the localisation of pathology on the heatmap. The level of agreement with the GradCAM and binary diagnosis was found to be correlated with trust (GradCAM: -.515; -.584, a significant large negative correlation at the 0.01 level (p < .01); binary diagnosis: -.309; -.369, a significant medium negative correlation at the 0.01 level (p < .01)). This study shows that the extent of agreement with both AI binary diagnosis and heatmap is correlated with trust in AI for the participants in this study, where greater agreement with the form of AI feedback is associated with greater trust in AI, in particular in the heatmap form of AI feedback. Forms of explainable AI should be developed with cognisance of the need for precision and accuracy in localisation to promote appropriate trust in clinical end users.
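The heatmaps in the study are Gradient Class Activation Maps (GradCAM). As a rough illustration of the technique only (not the authors' code), this minimal NumPy sketch shows the core GradCAM computation: channel weights from global-average-pooled gradients, a weighted sum of activation maps, then ReLU and normalisation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal GradCAM sketch (illustrative, not the study's implementation).

    activations: (C, H, W) feature maps from the last convolutional layer
    gradients:   (C, H, W) gradients of the class score w.r.t. those maps
    Returns a (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (GradCAM's alpha_k)
    weights = gradients.mean(axis=(1, 2))             # shape (C,)
    # Weighted combination of the activation maps
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only regions with a positive influence on the class score
    cam = np.maximum(cam, 0)
    # Normalise so the map can be rendered as a heatmap overlay
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the activations and gradients would come from hooks on a trained CNN; the heatmap is then upsampled to the radiograph's resolution and overlaid on it.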

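The abstract reports negative agreement-trust correlations (e.g. -.515 for GradCAM) without naming the statistic in this record. As a hedged sketch of the kind of analysis involved, this computes a Pearson correlation on invented paired ratings; all numbers are hypothetical, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical paired responses: agreement with AI feedback vs. reported trust.
# A negative r means higher disagreement goes with lower trust, matching the
# direction of the effect described in the abstract.
agreement = [1, 2, 2, 3, 4, 5, 5]
trust     = [5, 5, 4, 3, 3, 2, 1]
r = pearson_r(agreement, trust)
```

With ordinal Likert-style ratings a rank-based coefficient (Spearman) would often be preferred; the sketch uses Pearson only because it needs no extra dependencies.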

Figures (g001-g010):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/cce3231d5a70/pdig.0000560.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/861131487d3d/pdig.0000560.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/bcfb176ab7f5/pdig.0000560.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/bd4f4018dd27/pdig.0000560.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/6ad6f31e92e9/pdig.0000560.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/e97875e35b7f/pdig.0000560.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/c786fd94f829/pdig.0000560.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/65aa8578296b/pdig.0000560.g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/d156e0338e7a/pdig.0000560.g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1da6/11305567/b5049a73cf9e/pdig.0000560.g010.jpg

Similar articles

1. Reporting radiographers' interaction with Artificial Intelligence - How do different forms of AI feedback impact trust and decision switching?
PLOS Digit Health. 2024 Aug 7;3(8):e0000560. doi: 10.1371/journal.pdig.0000560. eCollection 2024 Aug.
2. UK reporting radiographers' perceptions of AI in radiographic image interpretation - Current perspectives and future developments.
Radiography (Lond). 2022 Nov;28(4):881-888. doi: 10.1016/j.radi.2022.06.006. Epub 2022 Jul 1.
3. Navigating the ethical landscape of artificial intelligence in radiography: a cross-sectional study of radiographers' perspectives.
BMC Med Ethics. 2024 May 11;25(1):52. doi: 10.1186/s12910-024-01052-w.
4. An experimental machine learning study investigating the decision-making process of students and qualified radiographers when interpreting radiographic images.
PLOS Digit Health. 2023 Oct 25;2(10):e0000229. doi: 10.1371/journal.pdig.0000229. eCollection 2023 Oct.
5. Explainable AI decision support improves accuracy during telehealth strep throat screening.
Commun Med (Lond). 2024 Jul 24;4(1):149. doi: 10.1038/s43856-024-00568-x.
6. Artificial Intelligence: Guidance for clinical imaging and therapeutic radiography professionals, a summary by the Society of Radiographers AI working group.
Radiography (Lond). 2021 Nov;27(4):1192-1202. doi: 10.1016/j.radi.2021.07.028. Epub 2021 Aug 20.
7. An insight into the current perceptions of UK radiographers on the future impact of AI on the profession: A cross-sectional survey.
J Med Imaging Radiat Sci. 2022 Sep;53(3):347-361. doi: 10.1016/j.jmir.2022.05.010. Epub 2022 Jun 15.
8. Singapore radiographers' perceptions and expectations of artificial intelligence - A qualitative study.
J Med Imaging Radiat Sci. 2022 Dec;53(4):554-563. doi: 10.1016/j.jmir.2022.08.005. Epub 2022 Sep 15.
9. Beauty Is in the AI of the Beholder: Are We Ready for the Clinical Integration of Artificial Intelligence in Radiography? An Exploratory Analysis of Perceived AI Knowledge, Skills, Confidence, and Education Perspectives of UK Radiographers.
Front Digit Health. 2021 Nov 11;3:739327. doi: 10.3389/fdgth.2021.739327. eCollection 2021.
10. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.

Cited by

1. Artificial intelligence and radiographer preliminary image evaluation: What might the future hold for radiographers providing x-ray interpretation in the acute setting?
J Med Radiat Sci. 2024 Dec;71(4):495-498. doi: 10.1002/jmrs.821. Epub 2024 Sep 20.

References

1. Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study.
JAMA. 2023 Dec 19;330(23):2275-2284. doi: 10.1001/jama.2023.22295.
2. UK reporting radiographers' perceptions of AI in radiographic image interpretation - Current perspectives and future developments.
Radiography (Lond). 2022 Nov;28(4):881-888. doi: 10.1016/j.radi.2022.06.006. Epub 2022 Jul 1.
3. Patient apprehensions about the use of artificial intelligence in healthcare.
NPJ Digit Med. 2021 Sep 21;4(1):140. doi: 10.1038/s41746-021-00509-1.
4. Do as AI say: susceptibility in deployment of clinical decision-aids.
NPJ Digit Med. 2021 Feb 19;4(1):31. doi: 10.1038/s41746-021-00385-9.
5. Artificial intelligence for good health: a scoping review of the ethics literature.
BMC Med Ethics. 2021 Feb 15;22(1):14. doi: 10.1186/s12910-021-00577-8.
6. Detecting Asymmetric Patterns and Localizing Cancers on Mammograms.
Patterns (N Y). 2020 Oct 9;1(7). doi: 10.1016/j.patter.2020.100106. Epub 2020 Sep 21.
7. Position paper on COVID-19 imaging and AI: From the clinical needs and technological challenges to initial AI solutions at the lab and national level towards a new era for AI in healthcare.
Med Image Anal. 2020 Dec;66:101800. doi: 10.1016/j.media.2020.101800. Epub 2020 Aug 19.
8. Detection and localization of distal radius fractures: Deep learning system versus radiologists.
Eur J Radiol. 2020 May;126:108925. doi: 10.1016/j.ejrad.2020.108925. Epub 2020 Mar 9.
9. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement.
Radiology. 2019 Nov;293(2):436-440. doi: 10.1148/radiol.2019191586. Epub 2019 Oct 1.
10. Deep learning predicts hip fracture using confounding patient and healthcare variables.
NPJ Digit Med. 2019 Apr 30;2:31. doi: 10.1038/s41746-019-0105-1. eCollection 2019.