
Trust in Artificial Intelligence-Based Clinical Decision Support Systems Among Health Care Workers: Systematic Review.

Author Information

Tun Hein Minn, Rahman Hanif Abdul, Naing Lin, Malik Owais Ahmed

Affiliations

PAPRSB Institute of Health Sciences, Universiti Brunei Darussalam, Core Residential, Tower 4, Room 201A, UBDCorp, Jalan Tungku Link, Bandar Seri Begawan, BE1410, Brunei Darussalam, 673 7428942.

School of Digital Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei Darussalam.

Publication Information

J Med Internet Res. 2025 Jul 29;27:e69678. doi: 10.2196/69678.


DOI: 10.2196/69678
PMID: 40772775
Abstract

BACKGROUND: Artificial intelligence-based clinical decision support systems (AI-CDSSs) have enhanced personalized medicine and improved the efficiency of health care workers. Despite these opportunities, trust in these tools remains a critical factor for their successful integration into practice. Existing research lacks synthesized insights and actionable recommendations to guide the development of AI-CDSSs that foster trust among health care workers.

OBJECTIVE: This systematic review aims to identify and synthesize key factors that influence health care workers' trust in AI-CDSSs and to provide actionable recommendations for enhancing their trust in these systems.

METHODS: We conducted a systematic review of published studies from January 2020 to November 2024, retrieved from PubMed, Scopus, and Google Scholar. Inclusion criteria focused on studies that examined health care workers' perceptions, experiences, and trust in AI-CDSSs. Studies in non-English languages and those unrelated to health care settings were excluded. Two independent reviewers followed the Cochrane Collaboration Handbook and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines. Analysis was conducted using a developed data charter. The Critical Appraisal Skills Programme tool was applied to assess the quality of the included studies and to evaluate the risk of bias, ensuring a rigorous and systematic review process.

RESULTS: A total of 27 studies met the inclusion criteria, involving diverse health care workers, predominantly in hospital settings. Qualitative methods were the most common (n=16, 59%), with sample sizes ranging from small focus groups to cohorts of over 1000 participants. Eight key themes emerged as pivotal in improving health care workers' trust in AI-CDSSs: (1) System Transparency, emphasizing the need for clear and interpretable AI; (2) Training and Familiarity, highlighting the importance of knowledge sharing and user education; (3) System Usability, focusing on effective integration into clinical workflows; (4) Clinical Reliability, addressing the consistency and accuracy of system performance; (5) Credibility and Validation, referring to how well the system performs across diverse clinical contexts; (6) Ethical Considerations, examining medicolegal liability, fairness, and adherence to ethical standards; (7) Human-Centric Design, prioritizing patient-centered approaches; and (8) Customization and Control, highlighting the need to tailor tools to specific clinical needs while preserving health care providers' decision-making autonomy. Barriers to trust included algorithmic opacity, insufficient training, and ethical challenges, while enabling factors for health care workers' trust in AI-CDSS tools were transparency, usability, and clinical reliability.

CONCLUSIONS: The findings highlight the need for explainable AI models, comprehensive training, stakeholder involvement, and human-centered design to foster health care workers' trust in AI-CDSSs. Although the heterogeneity of study designs and the lack of specific data limit further analysis, this review bridges existing gaps by identifying key themes that support trust in AI-CDSSs. It also recommends that future research include diverse demographics, cross-cultural perspectives, and contextual differences in trust across various health care professions.
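
The retrieval step described in METHODS (records published between January 2020 and November 2024, drawn in part from PubMed) can be sketched programmatically. The Python example below is a minimal illustration of querying NCBI's public E-utilities esearch endpoint with that date window; the search term is a hypothetical placeholder rather than the authors' published search strategy, and Scopus and Google Scholar would require their own interfaces.

# Minimal sketch: a PubMed query over the review's date window via NCBI E-utilities.
# The search term is a hypothetical stand-in, not the authors' actual strategy.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": '"clinical decision support" AND "artificial intelligence" AND trust',
    "datetype": "pdat",       # filter by publication date
    "mindate": "2020/01/01",  # January 2020
    "maxdate": "2024/11/30",  # November 2024
    "retmax": 200,            # maximum number of PMIDs to return
    "retmode": "json",
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])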


Similar Articles

[1]
Trust in Artificial Intelligence-Based Clinical Decision Support Systems Among Health Care Workers: Systematic Review.

J Med Internet Res. 2025-7-29

[2]
Designing Clinical Decision Support Systems (CDSS)-A User-Centered Lens of the Design Characteristics, Challenges, and Implications: Systematic Review.

J Med Internet Res. 2025-6-20

[3]
Accreditation through the eyes of nurse managers: an infinite staircase or a phenomenon that evaporates like water.

J Health Organ Manag. 2025-6-30

[4]
Health professionals' experience of teamwork education in acute hospital settings: a systematic review of qualitative literature.

JBI Database System Rev Implement Rep. 2016-4

[5]
Health Care Professionals' Experience of Using AI: Systematic Review With Narrative Synthesis.

J Med Internet Res. 2024-10-30

[6]
AI for IMPACTS Framework for Evaluating the Long-Term Real-World Impacts of AI-Powered Clinician Tools: Systematic Review and Narrative Synthesis.

J Med Internet Res. 2025-2-5

[7]
Improving AI-Based Clinical Decision Support Systems and Their Integration Into Care From the Perspective of Experts: Interview Study Among Different Stakeholders.

JMIR Med Inform. 2025-7-7

[8]
The Role of AI in Nursing Education and Practice: Umbrella Review.

J Med Internet Res. 2025-4-4

[9]
Stakeholders' perceptions and experiences of factors influencing the commissioning, delivery, and uptake of general health checks: a qualitative evidence synthesis.

Cochrane Database Syst Rev. 2025-3-20

[10]
Perspectives of Health Care Professionals on the Use of AI to Support Clinical Decision-Making in the Management of Multiple Long-Term Conditions: Interview Study.

J Med Internet Res. 2025-7-4
