


Investigating the role of AI explanations in lay individuals' comprehension of radiology reports: A metacognition lens.

Authors

Genc Yegin, Ahsen Mehmet Eren, Zhang Zhan

Affiliations

Seidenberg School of Computer Science and Information Systems, Pace University, New York, New York, United States of America.

Department of Business Administration, University of Illinois at Urbana-Champaign, Champaign, Illinois, United States of America.

Publication

PLoS One. 2025 Sep 10;20(9):e0321342. doi: 10.1371/journal.pone.0321342. eCollection 2025.

DOI: 10.1371/journal.pone.0321342
PMID: 40929105
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12422517/
Abstract

While there has been extensive research on explainable artificial intelligence (XAI) techniques for enhancing AI recommendations, the metacognitive processes involved in interacting with AI explanations remain underexplored. This study examines how AI explanations impact human decision-making by leveraging cognitive mechanisms that evaluate the accuracy of AI recommendations. We conducted a large-scale experiment (N = 4,302) on Amazon Mechanical Turk (AMT), in which participants classified radiology reports as normal or abnormal. Participants were randomly assigned to three groups: a) no AI input (control group), b) AI prediction only, and c) AI prediction with explanation. Our results indicate that AI explanations enhanced task performance, and that explanations are more effective when AI prediction confidence is high or users' self-confidence is low. We conclude by discussing the implications of our findings.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0003/12422517/64231216a834/pone.0321342.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0003/12422517/7ab8ef424766/pone.0321342.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0003/12422517/360b188b061c/pone.0321342.g002.jpg

Similar Articles

1. Investigating the role of AI explanations in lay individuals' comprehension of radiology reports: A metacognition lens. PLoS One. 2025 Sep 10;20(9):e0321342. doi: 10.1371/journal.pone.0321342.
2. Transdiagnostic compulsivity is associated with reduced reminder setting, only partially attributable to overconfidence. Elife. 2025 May 29;13:RP98114. doi: 10.7554/eLife.98114.
3. Artificial intelligence for detecting keratoconus. Cochrane Database Syst Rev. 2023 Nov 15;11(11):CD014911. doi: 10.1002/14651858.CD014911.pub2.
4. Radiology artificial intelligence: a systematic review and evaluation of methods (RAISE). Eur Radiol. 2022 Nov;32(11):7998-8007. doi: 10.1007/s00330-022-08784-6.
5. Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19. Cochrane Database Syst Rev. 2022 May 20;5(5):CD013665. doi: 10.1002/14651858.CD013665.pub3.
6. Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study. J Med Internet Res. 2025 Jun 9;27:e73337. doi: 10.2196/73337.
7. Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification. J Biomed Inform. 2024 Aug;156:104673. doi: 10.1016/j.jbi.2024.104673.
8. The Effect of Workload and Task Priority on Multitasking Performance and Reliance on Level 1 Explainable AI (XAI) Use. Hum Factors. 2025 Sep;67(9):897-915. doi: 10.1177/00187208251323478.
9. Evaluation of a Deep Learning and XAI based Facial Phenotyping Tool for Genetic Syndromes: A Clinical User Study. medRxiv. 2025 Jun 9:2025.06.08.25328588. doi: 10.1101/2025.06.08.25328588.
10. Prescription of Controlled Substances: Benefits and Risks.

References Cited in This Article

1. Explainable AI improves task performance in human-AI collaboration. Sci Rep. 2024 Dec 28;14(1):31150. doi: 10.1038/s41598-024-82501-9.
2. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol. 2024 Apr 17;15:1382693. doi: 10.3389/fpsyg.2024.1382693.
3. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110.
4. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). Comput Methods Programs Biomed. 2022 Nov;226:107161. doi: 10.1016/j.cmpb.2022.107161.
5. Principles and Practice of Explainable Machine Learning. Front Big Data. 2021 Jul 1;4:688969. doi: 10.3389/fdata.2021.688969.
6. XAI-Explainable artificial intelligence. Sci Robot. 2019 Dec 18;4(37). doi: 10.1126/scirobotics.aay7120.
7. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4793-4813. doi: 10.1109/TNNLS.2020.3027314.
8. Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice. Artif Intell Med. 2020 May;105:101840. doi: 10.1016/j.artmed.2020.101840.
9. GkmExplain: fast and accurate interpretation of nonlinear gapped k-mer SVMs. Bioinformatics. 2019 Jul 15;35(14):i173-i182. doi: 10.1093/bioinformatics/btz322.
10. Why Does Advice Discounting Occur? The Combined Roles of Confidence and Trust. Front Psychol. 2018 Nov 28;9:2381. doi: 10.3389/fpsyg.2018.02381.