Tun Hein Minn, Rahman Hanif Abdul, Naing Lin, Malik Owais Ahmed
PAPRSB Institute of Health Sciences, Universiti Brunei Darussalam, Core Residential, Tower 4, Room 201A, UBDCorp, Jalan Tungku Link, Bandar Seri Begawan, BE1410, Brunei Darussalam, 673 7428942.
School of Digital Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei Darussalam.
J Med Internet Res. 2025 Jul 29;27:e69678. doi: 10.2196/69678.
BACKGROUND: Artificial intelligence-based clinical decision support systems (AI-CDSSs) have enhanced personalized medicine and improved the efficiency of health care workers. Despite these opportunities, trust in these tools remains a critical factor for their successful integration into practice. Existing research lacks synthesized insights and actionable recommendations to guide the development of AI-CDSSs that foster trust among health care workers. OBJECTIVE: This systematic review aims to identify and synthesize key factors that influence health care workers' trust in AI-CDSSs and to provide actionable recommendations for enhancing their trust in these systems. METHODS: We conducted a systematic review of published studies from January 2020 to November 2024, retrieved from PubMed, Scopus, and Google Scholar. Inclusion criteria focused on studies that examined health care workers' perceptions, experiences, and trust in AI-CDSSs. Studies in non-English languages and those unrelated to health care settings were excluded. Two independent reviewers followed the Cochrane Collaboration Handbook and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines. Analysis was conducted using a developed data charter. The Critical Appraisal Skills Programme tool was applied to assess the quality of the included studies and to evaluate the risk of bias, ensuring a rigorous and systematic review process. RESULTS: A total of 27 studies met the inclusion criteria, involving diverse health care workers, predominantly in hospital settings. Qualitative methods were the most common (n=16, 59%), with sample sizes ranging from small focus groups to cohorts of over 1000 participants.
Eight key themes emerged as pivotal in improving health care workers' trust in AI-CDSSs: (1) System Transparency, emphasizing the need for clear and interpretable AI; (2) Training and Familiarity, highlighting the importance of knowledge sharing and user education; (3) System Usability, focusing on effective integration into clinical workflows; (4) Clinical Reliability, addressing the consistency and accuracy of system performance; (5) Credibility and Validation, referring to how well the system performs across diverse clinical contexts; (6) Ethical Consideration, examining medicolegal liability, fairness, and adherence to ethical standards; (7) Human-Centric Design, prioritizing patient-centered approaches; and (8) Customization and Control, highlighting the need to tailor tools to specific clinical needs while preserving health care providers' decision-making autonomy. Barriers to trust included algorithmic opacity, insufficient training, and ethical challenges, while enabling factors for health care workers' trust in AI-CDSS tools were transparency, usability, and clinical reliability. CONCLUSIONS: The findings highlight the need for explainable AI models, comprehensive training, stakeholder involvement, and human-centered design to foster health care workers' trust in AI-CDSSs. Although the heterogeneity of study designs and lack of specific data limit further analysis, this review bridges existing gaps by identifying key themes that support trust in AI-CDSSs. It also recommends that future research include diverse demographics, cross-cultural perspectives, and contextual differences in trust across various health care professions.