Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review.

Author Information

Liu Yushu, Liu Chenxi, Zheng Jianing, Xu Chang, Wang Dan

Affiliations

School of Medicine and Health Management, Huazhong University of Science and Technology, Wuhan, China.

Major Disciplinary Platform under Double First-Class Initiative for Liberal Arts at Huazhong University of Science and Technology (Research Center for High-Quality Development of Hospitals), Wuhan, China.

Publication Information

J Med Internet Res. 2025 Aug 7;27:e73374. doi: 10.2196/73374.


DOI: 10.2196/73374
PMID: 40773743
Abstract

BACKGROUND: The integration of artificial intelligence (AI) in health care has significant potential, yet its acceptance by health care professionals (HCPs) is essential for successful implementation. Understanding HCPs' perspectives on the explainability and integrability of medical AI is crucial, as these factors influence their willingness to adopt and effectively use such technologies.

OBJECTIVE: This study aims to improve the acceptance and use of medical AI. From a user perspective, it explores HCPs' understanding of the explainability and integrability of medical AI.

METHODS: We performed a mixed systematic review, conducting a comprehensive search of the PubMed, Web of Science, Scopus, IEEE Xplore, ACM Digital Library, and arXiv databases for studies published between 2014 and 2024. Studies concerning the explainability or integrability of medical AI were included. Study quality was assessed using the Joanna Briggs Institute critical appraisal checklist and the Mixed Methods Appraisal Tool, with only medium- or high-quality studies included. Qualitative data were analyzed via thematic analysis, while quantitative findings were synthesized narratively.

RESULTS: Of 11,888 records initially retrieved, 22 (0.19%) studies met the inclusion criteria. All selected studies were published from 2020 onward, reflecting the recency and relevance of the topic. The majority (18/22, 82%) originated from high-income countries, and most (17/22, 77%) adopted qualitative methodologies, with the remainder (5/22, 23%) using quantitative or mixed methods approaches. From the included studies, a conceptual framework was developed that delineates HCPs' perceptions of explainability and integrability. Regarding explainability, HCPs predominantly emphasized postprocessing explanations, particularly aspects of local explainability such as feature relevance and case-specific outputs. Visual tools that enhance the explainability of AI decisions (eg, heat maps and feature attribution) were frequently mentioned as important enablers of trust and acceptance. For integrability, key concerns included workflow adaptation, system compatibility with electronic health records, and overall ease of use. These aspects were consistently identified as primary conditions for real-world adoption.

CONCLUSIONS: To foster wider adoption of AI in clinical settings, future system designs must center on the needs of HCPs. Enhancing post hoc explainability and ensuring seamless integration into existing workflows are critical to building trust and promoting sustained use. The proposed conceptual framework can serve as a practical guide for developers, researchers, and policy makers in aligning AI solutions with frontline user expectations.

TRIAL REGISTRATION: PROSPERO CRD420250652253; https://www.crd.york.ac.uk/PROSPERO/view/CRD420250652253.
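As a quick sanity check, the percentages reported in the Results can be reproduced from the raw counts given in the abstract (a minimal sketch; the counts themselves are taken directly from the abstract text):

```python
# Counts reported in the abstract's Results section.
records_retrieved = 11_888   # records initially retrieved
records_included = 22        # studies meeting inclusion criteria
high_income = 18             # studies from high-income countries
qualitative = 17             # studies using qualitative methodologies
quant_or_mixed = 5           # studies using quantitative or mixed methods

# Inclusion rate: 22 of 11,888 records.
inclusion_rate = records_included / records_retrieved * 100
print(f"Inclusion rate: {inclusion_rate:.2f}%")                       # 0.19%

# Shares of the 22 included studies, rounded to whole percentages.
print(f"High-income share: {high_income / records_included:.0%}")     # 82%
print(f"Qualitative share: {qualitative / records_included:.0%}")     # 77%
print(f"Quant/mixed share: {quant_or_mixed / records_included:.0%}")  # 23%
```

All four figures round to the values stated in the abstract (0.19%, 82%, 77%, and 23%).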


Similar Articles

[1]
Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review.

J Med Internet Res. 2025-8-7

[2]
Perspectives of Health Care Professionals on the Use of AI to Support Clinical Decision-Making in the Management of Multiple Long-Term Conditions: Interview Study.

J Med Internet Res. 2025-7-4

[3]
Prescription of Controlled Substances: Benefits and Risks

2025-1

[4]
Health professionals' experience of teamwork education in acute hospital settings: a systematic review of qualitative literature.

JBI Database System Rev Implement Rep. 2016-4

[5]
Understanding factors influencing the implementation of medicine risk communications by healthcare professionals in clinical practice: a systematic review using the Theoretical Domains Framework.

Res Social Adm Pharm. 2024-2

[6]
Challenges and Opportunities for Data Sharing Related to Artificial Intelligence Tools in Health Care in Low- and Middle-Income Countries: Systematic Review and Case Study From Thailand.

J Med Internet Res. 2025-2-4

[7]
Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers.

J Med Internet Res. 2024-8-2

[8]
Perceptions of, Barriers to, and Facilitators of the Use of AI in Primary Care: Systematic Review of Qualitative Studies.

J Med Internet Res. 2025-6-25

[9]
Stakeholder Perspectives of Clinical Artificial Intelligence Implementation: Systematic Review of Qualitative Evidence.

J Med Internet Res. 2023-1-10

[10]
Health Care Professionals' Experiences and Opinions About Generative AI and Ambient Scribes in Clinical Documentation: Protocol for a Scoping Review.

JMIR Res Protoc. 2025-8-8

