Similar Articles

1. Trust criteria for artificial intelligence in health: normative and epistemic considerations. J Med Ethics. 2024 Jul 23;50(8):544-551. doi: 10.1136/jme-2023-109338.
2. Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists. BMC Med Ethics. 2024 Jul 18;25(1):78. doi: 10.1186/s12910-024-01079-z.
3. AI-driven decision support systems and epistemic reliance: a qualitative study on obstetricians' and midwives' perspectives on integrating AI-driven CTG into clinical decision making. BMC Med Ethics. 2024 Jan 6;25(1):6. doi: 10.1186/s12910-023-00990-1.
4. Artificial intelligence (AI) and machine learning (ML) based decision support systems in mental health: An integrative review. Int J Ment Health Nurs. 2023 Aug;32(4):966-978. doi: 10.1111/inm.13114. Epub 2023 Feb 6.
5. Algor-ethics: charting the ethical path for AI in critical care. J Clin Monit Comput. 2024 Aug;38(4):931-939. doi: 10.1007/s10877-024-01157-y. Epub 2024 Apr 4.
6. "I don't think people are ready to trust these algorithms at face value": trust and the use of machine learning algorithms in the diagnosis of rare disease. BMC Med Ethics. 2022 Nov 16;23(1):112. doi: 10.1186/s12910-022-00842-4.
7. Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals. J Med Internet Res. 2024 Apr 25;26:e56764. doi: 10.2196/56764.
8. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
9. AI-Assisted Decision-Making in Long-Term Care: Qualitative Study on Prerequisites for Responsible Innovation. JMIR Nurs. 2024 Jul 25;7:e55962. doi: 10.2196/55962.
10. How Bioethics Can Shape Artificial Intelligence and Machine Learning. Hastings Cent Rep. 2018 Sep;48(5):10-13. doi: 10.1002/hast.895.

Cited By

1. Integrating New Technologies in Lipidology: A Comprehensive Review. J Clin Med. 2025 Jul 14;14(14):4984. doi: 10.3390/jcm14144984.
2. A multi-site study of clinician perspectives in the lifecycle of an algorithmic risk prediction tool. SSM Qual Res Health. 2025 Jun;7:100562. doi: 10.1016/j.ssmqr.2025.100562. Epub 2025 Apr 25.
3. Patient information needs for transparent and trustworthy cardiovascular artificial intelligence: A qualitative study. PLOS Digit Health. 2025 Apr 21;4(4):e0000826. doi: 10.1371/journal.pdig.0000826. eCollection 2025 Apr.
4. Artificial Intelligence in Orthodontics: Concerns, Conjectures, and Ethical Dilemmas. Int Dent J. 2025 Feb;75(1):20-22. doi: 10.1016/j.identj.2024.11.002. Epub 2024 Nov 26.
5. Machine learning-based prediction models in medical decision-making in kidney disease: patient, caregiver, and clinician perspectives on trust and appropriate use. J Am Med Inform Assoc. 2025 Jan 1;32(1):51-62. doi: 10.1093/jamia/ocae255.
6. Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care. Am J Bioeth. 2025 Mar;25(3):102-114. doi: 10.1080/15265161.2024.2399828. Epub 2024 Sep 17.

References

1. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus. 2023 Feb 19;15(2):e35179. doi: 10.7759/cureus.35179. eCollection 2023 Feb.
2. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. NPJ Digit Med. 2023 Jan 18;6(1):6. doi: 10.1038/s41746-023-00751-9.
3. AI in the hands of imperfect users. NPJ Digit Med. 2022 Dec 28;5(1):197. doi: 10.1038/s41746-022-00737-z.
4. A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med. 2022 Nov;133:102423. doi: 10.1016/j.artmed.2022.102423. Epub 2022 Oct 9.
5. Mitigating Racial Bias in Machine Learning. J Law Med Ethics. 2022;50(1):92-100. doi: 10.1017/jme.2022.13.
6. Beware explanations from AI in health care. Science. 2021 Jul 16;373(6552):284-286. doi: 10.1126/science.abg1834.
7. Do as AI say: susceptibility in deployment of clinical decision-aids. NPJ Digit Med. 2021 Feb 19;4(1):31. doi: 10.1038/s41746-021-00385-9.
8. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Sci Eng Ethics. 2020 Oct;26(5):2749-2767. doi: 10.1007/s11948-020-00228-y. Epub 2020 Jun 10.
9. Using Nudges to Enhance Clinicians' Implementation of Shared Decision Making With Patient Decision Aids. MDM Policy Pract. 2020 Apr 26;5(1):2381468320915906. doi: 10.1177/2381468320915906. eCollection 2020 Jan-Jun.
10. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digit Med. 2020 Apr 7;3:53. doi: 10.1038/s41746-020-0262-2. eCollection 2020.

Trust criteria for artificial intelligence in health: normative and epistemic considerations.

Affiliations

Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, USA.

Publication Information

J Med Ethics. 2024 Jul 23;50(8):544-551. doi: 10.1136/jme-2023-109338.

DOI: 10.1136/jme-2023-109338
PMID: 37979976
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11101592/
Abstract

Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool's computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on the accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of algorithms themselves, suggesting a need to distinguish 'source' from 'functional' explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.
